Re: [Freedreno] [v1] drm/msm/disp/dpu1: fix warn stack reported during dpu resume

2021-03-31 Thread Dmitry Baryshkov

On 01/04/2021 01:47, Rob Clark wrote:

On Wed, Mar 31, 2021 at 9:03 AM Dmitry Baryshkov
 wrote:


On 31/03/2021 14:27, Kalyan Thota wrote:

WARN_ON was introduced by the below commit to catch runtime resumes
that are getting triggered before icc path was set.

"drm/msm/disp/dpu1: icc path needs to be set before dpu runtime resume"

For the targets where the bw scaling is not enabled, this WARN_ON is
a false alarm. Fix the WARN condition appropriately.


Should we change all DPU targets to move bw scaling to the mdp device from
the mdss node? The limitation to sc7180 looks artificial.


yes, we should, this keeps biting us on 845


Done, 
https://lore.kernel.org/linux-arm-msm/20210401020533.3956787-2-dmitry.barysh...@linaro.org/






Reported-by: Steev Klimaszewski 


Please add Fixes: tag as well


Signed-off-by: Kalyan Thota 
---
   drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c  |  8 +---
   drivers/gpu/drm/msm/disp/dpu1/dpu_kms.h  |  9 +
   drivers/gpu/drm/msm/disp/dpu1/dpu_mdss.c | 11 ++-
   3 files changed, 20 insertions(+), 8 deletions(-)

diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c 
b/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c
index cab387f..0071a4d 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c
@@ -294,6 +294,9 @@ static int dpu_kms_parse_data_bus_icc_path(struct dpu_kms 
*dpu_kms)
   struct icc_path *path1;
   struct drm_device *dev = dpu_kms->dev;

+ if (!dpu_supports_bw_scaling(dev))
+ return 0;
+
   path0 = of_icc_get(dev->dev, "mdp0-mem");
   path1 = of_icc_get(dev->dev, "mdp1-mem");

@@ -934,8 +937,7 @@ static int dpu_kms_hw_init(struct msm_kms *kms)
   DPU_DEBUG("REG_DMA is not defined");
   }

- if (of_device_is_compatible(dev->dev->of_node, "qcom,sc7180-mdss"))
- dpu_kms_parse_data_bus_icc_path(dpu_kms);
+ dpu_kms_parse_data_bus_icc_path(dpu_kms);

   pm_runtime_get_sync(&dpu_kms->pdev->dev);

@@ -1198,7 +1200,7 @@ static int __maybe_unused dpu_runtime_resume(struct 
device *dev)

   ddev = dpu_kms->dev;

- WARN_ON(!(dpu_kms->num_paths));
+ WARN_ON((dpu_supports_bw_scaling(ddev) && !dpu_kms->num_paths));
   /* Min vote of BW is required before turning on AXI clk */
   for (i = 0; i < dpu_kms->num_paths; i++)
   icc_set_bw(dpu_kms->path[i], 0, Bps_to_icc(MIN_IB_BW));
diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.h 
b/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.h
index d6717d6..f7bcc0a 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.h
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.h
@@ -154,6 +154,15 @@ struct vsync_info {

   #define to_dpu_global_state(x) container_of(x, struct dpu_global_state, base)

+/**
+ * dpu_supports_bw_scaling: returns true for drivers that support bw scaling.
+ * @dev: Pointer to drm_device structure
+ */
+static inline int dpu_supports_bw_scaling(struct drm_device *dev)
+{
+ return of_device_is_compatible(dev->dev->of_node, "qcom,sc7180-mdss");
+}
+
   /* Global private object state for tracking resources that are shared across
* multiple kms objects (planes/crtcs/etc).
*/
diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_mdss.c 
b/drivers/gpu/drm/msm/disp/dpu1/dpu_mdss.c
index cd40788..8cd712c 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_mdss.c
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_mdss.c
@@ -41,6 +41,9 @@ static int dpu_mdss_parse_data_bus_icc_path(struct drm_device 
*dev,
   struct icc_path *path0 = of_icc_get(dev->dev, "mdp0-mem");
   struct icc_path *path1 = of_icc_get(dev->dev, "mdp1-mem");

+ if (dpu_supports_bw_scaling(dev))
+ return 0;
+
   if (IS_ERR_OR_NULL(path0))
   return PTR_ERR_OR_ZERO(path0);

@@ -276,11 +279,9 @@ int dpu_mdss_init(struct drm_device *dev)

   DRM_DEBUG("mapped mdss address space @%pK\n", dpu_mdss->mmio);

- if (!of_device_is_compatible(dev->dev->of_node, "qcom,sc7180-mdss")) {
- ret = dpu_mdss_parse_data_bus_icc_path(dev, dpu_mdss);
- if (ret)
- return ret;
- }
+ ret = dpu_mdss_parse_data_bus_icc_path(dev, dpu_mdss);
+ if (ret)
+ return ret;

   mp = &dpu_mdss->mp;
   ret = msm_dss_parse_clock(pdev, mp);




--
With best wishes
Dmitry



--
With best wishes
Dmitry


[Freedreno] [PATCH 2/2] drm/msm/dpu: always use mdp device to scale bandwidth

2021-03-31 Thread Dmitry Baryshkov
Currently DPU driver scales bandwidth and core clock for sc7180 only,
while the rest of chips get static bandwidth votes. Make all chipsets
scale bandwidth and clock per composition requirements like sc7180 does.
Drop old voting path completely.

Tested on RB3 (SDM845) and RB5 (SM8250).

Signed-off-by: Dmitry Baryshkov 
---
 drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c  |  3 +-
 drivers/gpu/drm/msm/disp/dpu1/dpu_mdss.c | 51 +---
 2 files changed, 2 insertions(+), 52 deletions(-)

diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c 
b/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c
index 85f2c3564c96..fb061e666faa 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c
@@ -933,8 +933,7 @@ static int dpu_kms_hw_init(struct msm_kms *kms)
DPU_DEBUG("REG_DMA is not defined");
}
 
-   if (of_device_is_compatible(dev->dev->of_node, "qcom,sc7180-mdss"))
-   dpu_kms_parse_data_bus_icc_path(dpu_kms);
+   dpu_kms_parse_data_bus_icc_path(dpu_kms);
 
pm_runtime_get_sync(&dpu_kms->pdev->dev);
 
diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_mdss.c 
b/drivers/gpu/drm/msm/disp/dpu1/dpu_mdss.c
index cd4078807db1..3416e9617ee9 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_mdss.c
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_mdss.c
@@ -31,40 +31,8 @@ struct dpu_mdss {
void __iomem *mmio;
struct dss_module_power mp;
struct dpu_irq_controller irq_controller;
-   struct icc_path *path[2];
-   u32 num_paths;
 };
 
-static int dpu_mdss_parse_data_bus_icc_path(struct drm_device *dev,
-   struct dpu_mdss *dpu_mdss)
-{
-   struct icc_path *path0 = of_icc_get(dev->dev, "mdp0-mem");
-   struct icc_path *path1 = of_icc_get(dev->dev, "mdp1-mem");
-
-   if (IS_ERR_OR_NULL(path0))
-   return PTR_ERR_OR_ZERO(path0);
-
-   dpu_mdss->path[0] = path0;
-   dpu_mdss->num_paths = 1;
-
-   if (!IS_ERR_OR_NULL(path1)) {
-   dpu_mdss->path[1] = path1;
-   dpu_mdss->num_paths++;
-   }
-
-   return 0;
-}
-
-static void dpu_mdss_icc_request_bw(struct msm_mdss *mdss)
-{
-   struct dpu_mdss *dpu_mdss = to_dpu_mdss(mdss);
-   int i;
-   u64 avg_bw = dpu_mdss->num_paths ? MAX_BW / dpu_mdss->num_paths : 0;
-
-   for (i = 0; i < dpu_mdss->num_paths; i++)
-   icc_set_bw(dpu_mdss->path[i], avg_bw, kBps_to_icc(MAX_BW));
-}
-
 static void dpu_mdss_irq(struct irq_desc *desc)
 {
struct dpu_mdss *dpu_mdss = irq_desc_get_handler_data(desc);
@@ -178,8 +146,6 @@ static int dpu_mdss_enable(struct msm_mdss *mdss)
struct dss_module_power *mp = &dpu_mdss->mp;
int ret;
 
-   dpu_mdss_icc_request_bw(mdss);
-
ret = msm_dss_enable_clk(mp->clk_config, mp->num_clk, true);
if (ret) {
DPU_ERROR("clock enable failed, ret:%d\n", ret);
@@ -213,15 +179,12 @@ static int dpu_mdss_disable(struct msm_mdss *mdss)
 {
struct dpu_mdss *dpu_mdss = to_dpu_mdss(mdss);
struct dss_module_power *mp = &dpu_mdss->mp;
-   int ret, i;
+   int ret;
 
ret = msm_dss_enable_clk(mp->clk_config, mp->num_clk, false);
if (ret)
DPU_ERROR("clock disable failed, ret:%d\n", ret);
 
-   for (i = 0; i < dpu_mdss->num_paths; i++)
-   icc_set_bw(dpu_mdss->path[i], 0, 0);
-
return ret;
 }
 
@@ -232,7 +195,6 @@ static void dpu_mdss_destroy(struct drm_device *dev)
struct dpu_mdss *dpu_mdss = to_dpu_mdss(priv->mdss);
struct dss_module_power *mp = &dpu_mdss->mp;
int irq;
-   int i;
 
pm_runtime_suspend(dev->dev);
pm_runtime_disable(dev->dev);
@@ -242,9 +204,6 @@ static void dpu_mdss_destroy(struct drm_device *dev)
msm_dss_put_clk(mp->clk_config, mp->num_clk);
devm_kfree(&pdev->dev, mp->clk_config);
 
-   for (i = 0; i < dpu_mdss->num_paths; i++)
-   icc_put(dpu_mdss->path[i]);
-
if (dpu_mdss->mmio)
devm_iounmap(&pdev->dev, dpu_mdss->mmio);
dpu_mdss->mmio = NULL;
@@ -276,12 +235,6 @@ int dpu_mdss_init(struct drm_device *dev)
 
DRM_DEBUG("mapped mdss address space @%pK\n", dpu_mdss->mmio);
 
-   if (!of_device_is_compatible(dev->dev->of_node, "qcom,sc7180-mdss")) {
-   ret = dpu_mdss_parse_data_bus_icc_path(dev, dpu_mdss);
-   if (ret)
-   return ret;
-   }
-
mp = &dpu_mdss->mp;
ret = msm_dss_parse_clock(pdev, mp);
if (ret) {
@@ -307,8 +260,6 @@ int dpu_mdss_init(struct drm_device *dev)
 
pm_runtime_enable(dev->dev);
 
-   dpu_mdss_icc_request_bw(priv->mdss);
-
return ret;
 
 irq_error:
-- 
2.30.2



[Freedreno] [PATCH 1/2] drm/msm/dpu: fill missing details in hw catalog for sdm845 and sm8[12]50

2021-03-31 Thread Dmitry Baryshkov
Fill clk_inefficiency_factor, bw_inefficiency_factor and
min_prefill_lines in hw catalog data for sdm845 and sm8[12]50.

Efficiency factors are blindly copied from sc7180 data, while
min_prefill_lines is based on downstream display driver.
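
For a sense of what those numbers mean, here is a rough illustration of how
an inefficiency factor expressed in percent pads a computed figure (this is
only a sketch with a made-up helper name, not the driver's exact perf code):

#include <linux/kernel.h>

/* A factor of 120 pads the raw value by 20%, 105 pads it by 5%. */
static u64 apply_inefficiency_factor(u64 raw, u32 factor_percent)
{
        return mult_frac(raw, factor_percent, 100);
}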

Signed-off-by: Dmitry Baryshkov 
---
 drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.c | 8 
 1 file changed, 8 insertions(+)

diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.c 
b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.c
index 189f3533525c..a9f74c1177dd 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.c
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.c
@@ -817,6 +817,8 @@ static const struct dpu_perf_cfg sdm845_perf_data = {
{.rd_enable = 1, .wr_enable = 1},
{.rd_enable = 1, .wr_enable = 0}
},
+   .clk_inefficiency_factor = 105,
+   .bw_inefficiency_factor = 120,
 };
 
 static const struct dpu_perf_cfg sc7180_perf_data = {
@@ -852,6 +854,7 @@ static const struct dpu_perf_cfg sm8150_perf_data = {
.min_core_ib = 240,
.min_llcc_ib = 80,
.min_dram_ib = 80,
+   .min_prefill_lines = 24,
.danger_lut_tbl = {0xf, 0x, 0x0},
.qos_lut_tbl = {
{.nentry = ARRAY_SIZE(sm8150_qos_linear),
@@ -869,6 +872,8 @@ static const struct dpu_perf_cfg sm8150_perf_data = {
{.rd_enable = 1, .wr_enable = 1},
{.rd_enable = 1, .wr_enable = 0}
},
+   .clk_inefficiency_factor = 105,
+   .bw_inefficiency_factor = 120,
 };
 
 static const struct dpu_perf_cfg sm8250_perf_data = {
@@ -877,6 +882,7 @@ static const struct dpu_perf_cfg sm8250_perf_data = {
.min_core_ib = 480,
.min_llcc_ib = 0,
.min_dram_ib = 80,
+   .min_prefill_lines = 35,
.danger_lut_tbl = {0xf, 0x, 0x0},
.qos_lut_tbl = {
{.nentry = ARRAY_SIZE(sc7180_qos_linear),
@@ -894,6 +900,8 @@ static const struct dpu_perf_cfg sm8250_perf_data = {
{.rd_enable = 1, .wr_enable = 1},
{.rd_enable = 1, .wr_enable = 0}
},
+   .clk_inefficiency_factor = 105,
+   .bw_inefficiency_factor = 120,
 };
 
 /*
-- 
2.30.2



[Freedreno] [PATCH v2 4/4] drm/msm: Improved debugfs gem stats

2021-03-31 Thread Rob Clark
From: Rob Clark 

The last patch lost the breakdown of active vs inactive GEM objects in
$debugfs/gem.  But we can add some better stats to summarize not just
active vs inactive, but also purgable/purged to make up for that.

Signed-off-by: Rob Clark 
Tested-by: Douglas Anderson 
Reviewed-by: Douglas Anderson 
---
 drivers/gpu/drm/msm/msm_fb.c  |  3 ++-
 drivers/gpu/drm/msm/msm_gem.c | 31 ---
 drivers/gpu/drm/msm/msm_gem.h | 11 ++-
 3 files changed, 36 insertions(+), 9 deletions(-)

diff --git a/drivers/gpu/drm/msm/msm_fb.c b/drivers/gpu/drm/msm/msm_fb.c
index d42f0665359a..91c0e493aed5 100644
--- a/drivers/gpu/drm/msm/msm_fb.c
+++ b/drivers/gpu/drm/msm/msm_fb.c
@@ -33,6 +33,7 @@ static const struct drm_framebuffer_funcs 
msm_framebuffer_funcs = {
 #ifdef CONFIG_DEBUG_FS
 void msm_framebuffer_describe(struct drm_framebuffer *fb, struct seq_file *m)
 {
+   struct msm_gem_stats stats = {};
int i, n = fb->format->num_planes;
 
seq_printf(m, "fb: %dx%d@%4.4s (%2d, ID:%d)\n",
@@ -42,7 +43,7 @@ void msm_framebuffer_describe(struct drm_framebuffer *fb, 
struct seq_file *m)
for (i = 0; i < n; i++) {
seq_printf(m, "   %d: offset=%d pitch=%d, obj: ",
i, fb->offsets[i], fb->pitches[i]);
-   msm_gem_describe(fb->obj[i], m);
+   msm_gem_describe(fb->obj[i], m, &stats);
}
 }
 #endif
diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index 7ca30e36..2ecf7f1cef25 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++ b/drivers/gpu/drm/msm/msm_gem.c
@@ -873,7 +873,8 @@ static void describe_fence(struct dma_fence *fence, const 
char *type,
fence->seqno);
 }
 
-void msm_gem_describe(struct drm_gem_object *obj, struct seq_file *m)
+void msm_gem_describe(struct drm_gem_object *obj, struct seq_file *m,
+   struct msm_gem_stats *stats)
 {
struct msm_gem_object *msm_obj = to_msm_bo(obj);
struct dma_resv *robj = obj->resv;
@@ -885,11 +886,23 @@ void msm_gem_describe(struct drm_gem_object *obj, struct 
seq_file *m)
 
msm_gem_lock(obj);
 
+   stats->all.count++;
+   stats->all.size += obj->size;
+
+   if (is_active(msm_obj)) {
+   stats->active.count++;
+   stats->active.size += obj->size;
+   }
+
switch (msm_obj->madv) {
case __MSM_MADV_PURGED:
+   stats->purged.count++;
+   stats->purged.size += obj->size;
madv = " purged";
break;
case MSM_MADV_DONTNEED:
+   stats->purgable.count++;
+   stats->purgable.size += obj->size;
madv = " purgeable";
break;
case MSM_MADV_WILLNEED:
@@ -956,20 +969,24 @@ void msm_gem_describe(struct drm_gem_object *obj, struct 
seq_file *m)
 
 void msm_gem_describe_objects(struct list_head *list, struct seq_file *m)
 {
+   struct msm_gem_stats stats = {};
struct msm_gem_object *msm_obj;
-   int count = 0;
-   size_t size = 0;
 
seq_puts(m, "   flags   id ref  offset   kaddrsize 
madv  name\n");
list_for_each_entry(msm_obj, list, node) {
struct drm_gem_object *obj = &msm_obj->base;
seq_puts(m, "   ");
-   msm_gem_describe(obj, m);
-   count++;
-   size += obj->size;
+   msm_gem_describe(obj, m, &stats);
}
 
-   seq_printf(m, "Total %d objects, %zu bytes\n", count, size);
+   seq_printf(m, "Total:%4d objects, %9zu bytes\n",
+   stats.all.count, stats.all.size);
+   seq_printf(m, "Active:   %4d objects, %9zu bytes\n",
+   stats.active.count, stats.active.size);
+   seq_printf(m, "Purgable: %4d objects, %9zu bytes\n",
+   stats.purgable.count, stats.purgable.size);
+   seq_printf(m, "Purged:   %4d objects, %9zu bytes\n",
+   stats.purged.count, stats.purged.size);
 }
 #endif
 
diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h
index e6b28edb1db9..7c7d54bad189 100644
--- a/drivers/gpu/drm/msm/msm_gem.h
+++ b/drivers/gpu/drm/msm/msm_gem.h
@@ -158,7 +158,16 @@ struct drm_gem_object *msm_gem_import(struct drm_device 
*dev,
 __printf(2, 3)
 void msm_gem_object_set_name(struct drm_gem_object *bo, const char *fmt, ...);
 #ifdef CONFIG_DEBUG_FS
-void msm_gem_describe(struct drm_gem_object *obj, struct seq_file *m);
+
+struct msm_gem_stats {
+   struct {
+   unsigned count;
+   size_t size;
+   } all, active, purgable, purged;
+};
+
+void msm_gem_describe(struct drm_gem_object *obj, struct seq_file *m,
+   struct msm_gem_stats *stats);
 void msm_gem_describe_objects(struct list_head *list, struct seq_file *m);
 #endif
 
-- 
2.30.2


[Freedreno] [PATCH v2 3/4] drm/msm: Fix debugfs deadlock

2021-03-31 Thread Rob Clark
From: Rob Clark 

In normal cases the gem obj lock is acquired first before mm_lock.  The
exception is iterating the various object lists.  In the shrinker path,
deadlock is avoided by using msm_gem_trylock() and skipping over objects
that cannot be locked.  But for debugfs the straightforward thing is to
split things out into a separate list of all objects protected by its
own lock.
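
For reference, a minimal sketch of the trylock-and-skip pattern mentioned
above (the helper name is made up; msm_gem_trylock()/msm_gem_unlock() are the
locking helpers the text refers to):

/* Called while walking an mm_lock-protected list: never block on a
 * per-object lock here, since that would invert the usual
 * "obj lock, then mm_lock" ordering described above. */
static bool try_purge_one(struct msm_gem_object *msm_obj)
{
        if (!msm_gem_trylock(&msm_obj->base))
                return false;   /* object busy elsewhere, skip it */

        /* ... purge the object if it is still purgeable ... */

        msm_gem_unlock(&msm_obj->base);
        return true;
}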

Fixes: d984457b31c4 ("drm/msm: Add priv->mm_lock to protect active/inactive 
lists")
Signed-off-by: Rob Clark 
Tested-by: Douglas Anderson 
---
 drivers/gpu/drm/msm/msm_debugfs.c | 14 +++---
 drivers/gpu/drm/msm/msm_drv.c |  3 +++
 drivers/gpu/drm/msm/msm_drv.h |  9 -
 drivers/gpu/drm/msm/msm_gem.c | 14 +-
 drivers/gpu/drm/msm/msm_gem.h | 10 --
 5 files changed, 35 insertions(+), 15 deletions(-)

diff --git a/drivers/gpu/drm/msm/msm_debugfs.c 
b/drivers/gpu/drm/msm/msm_debugfs.c
index 85ad0babc326..d611cc8e54a4 100644
--- a/drivers/gpu/drm/msm/msm_debugfs.c
+++ b/drivers/gpu/drm/msm/msm_debugfs.c
@@ -111,23 +111,15 @@ static const struct file_operations msm_gpu_fops = {
 static int msm_gem_show(struct drm_device *dev, struct seq_file *m)
 {
struct msm_drm_private *priv = dev->dev_private;
-   struct msm_gpu *gpu = priv->gpu;
int ret;
 
-   ret = mutex_lock_interruptible(&priv->mm_lock);
+   ret = mutex_lock_interruptible(&priv->obj_lock);
if (ret)
return ret;
 
-   if (gpu) {
-   seq_printf(m, "Active Objects (%s):\n", gpu->name);
-   msm_gem_describe_objects(&gpu->active_list, m);
-   }
-
-   seq_printf(m, "Inactive Objects:\n");
-   msm_gem_describe_objects(&priv->inactive_dontneed, m);
-   msm_gem_describe_objects(&priv->inactive_willneed, m);
+   msm_gem_describe_objects(&priv->objects, m);
 
-   mutex_unlock(&priv->mm_lock);
+   mutex_unlock(&priv->obj_lock);
 
return 0;
 }
diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c
index 3462b0ea14c6..1ef1cd0cc714 100644
--- a/drivers/gpu/drm/msm/msm_drv.c
+++ b/drivers/gpu/drm/msm/msm_drv.c
@@ -474,6 +474,9 @@ static int msm_drm_init(struct device *dev, const struct 
drm_driver *drv)
 
priv->wq = alloc_ordered_workqueue("msm", 0);
 
+   INIT_LIST_HEAD(&priv->objects);
+   mutex_init(&priv->obj_lock);
+
INIT_LIST_HEAD(&priv->inactive_willneed);
INIT_LIST_HEAD(&priv->inactive_dontneed);
INIT_LIST_HEAD(&priv->inactive_purged);
diff --git a/drivers/gpu/drm/msm/msm_drv.h b/drivers/gpu/drm/msm/msm_drv.h
index 503168817e24..c84e6f84cb6d 100644
--- a/drivers/gpu/drm/msm/msm_drv.h
+++ b/drivers/gpu/drm/msm/msm_drv.h
@@ -174,7 +174,14 @@ struct msm_drm_private {
struct msm_rd_state *hangrd;   /* debugfs to dump hanging submits */
struct msm_perf_state *perf;
 
-   /*
+   /**
+* List of all GEM objects (mainly for debugfs, protected by obj_lock
+* (acquire before per GEM object lock)
+*/
+   struct list_head objects;
+   struct mutex obj_lock;
+
+   /**
 * Lists of inactive GEM objects.  Every bo is either in one of the
 * inactive lists (depending on whether or not it is shrinkable) or
 * gpu->active_list (for the gpu it is active on[1])
diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index bec01bb48fce..7ca30e36 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++ b/drivers/gpu/drm/msm/msm_gem.c
@@ -961,7 +961,7 @@ void msm_gem_describe_objects(struct list_head *list, 
struct seq_file *m)
size_t size = 0;
 
seq_puts(m, "   flags   id ref  offset   kaddrsize 
madv  name\n");
-   list_for_each_entry(msm_obj, list, mm_list) {
+   list_for_each_entry(msm_obj, list, node) {
struct drm_gem_object *obj = &msm_obj->base;
seq_puts(m, "   ");
msm_gem_describe(obj, m);
@@ -980,6 +980,10 @@ void msm_gem_free_object(struct drm_gem_object *obj)
struct drm_device *dev = obj->dev;
struct msm_drm_private *priv = dev->dev_private;
 
+   mutex_lock(&priv->obj_lock);
+   list_del(&msm_obj->node);
+   mutex_unlock(&priv->obj_lock);
+
mutex_lock(&priv->mm_lock);
if (msm_obj->dontneed)
mark_unpurgable(msm_obj);
@@ -1170,6 +1174,10 @@ static struct drm_gem_object *_msm_gem_new(struct 
drm_device *dev,
list_add_tail(&msm_obj->mm_list, &priv->inactive_willneed);
mutex_unlock(&priv->mm_lock);
 
+   mutex_lock(&priv->obj_lock);
+   list_add_tail(&msm_obj->node, &priv->objects);
+   mutex_unlock(&priv->obj_lock);
+
return obj;
 
 fail:
@@ -1240,6 +1248,10 @@ struct drm_gem_object *msm_gem_import(struct drm_device 
*dev,
list_add_tail(&msm_obj->mm_list, &priv->inactive_willneed);
mutex_unlock(&priv->mm_lock);
 
+   mutex_lock(&priv->obj_lock);
+   list_add_tail(&msm_ob

[Freedreno] [PATCH v2 2/4] drm/msm: Avoid mutex in shrinker_count()

2021-03-31 Thread Rob Clark
From: Rob Clark 

When the system is under heavy memory pressure, we can end up with lots
of concurrent calls into the shrinker.  Keeping a running tab on what we
can shrink avoids grabbing a lock in shrinker->count(), and avoids
shrinker->scan() getting called when not profitable.

Also, we can keep purged objects in their own list to avoid re-traversing
them, which further cuts down the time spent in the critical section.
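
For reference, a minimal sketch of the lock-free count() side of this idea
(callback and field names are assumptions based on this series, not a
verbatim copy of the patch):

static unsigned long
msm_gem_shrinker_count(struct shrinker *shrinker, struct shrink_control *sc)
{
        struct msm_drm_private *priv =
                container_of(shrinker, struct msm_drm_private, shrinker);

        /* shrinkable_count is only written under mm_lock; a slightly
         * stale read here is fine because count() is just an estimate. */
        return priv->shrinkable_count;
}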

Signed-off-by: Rob Clark 
Tested-by: Douglas Anderson 
---
 drivers/gpu/drm/msm/msm_drv.c  |  1 +
 drivers/gpu/drm/msm/msm_drv.h  |  6 ++-
 drivers/gpu/drm/msm/msm_gem.c  | 20 --
 drivers/gpu/drm/msm/msm_gem.h  | 53 --
 drivers/gpu/drm/msm/msm_gem_shrinker.c | 28 ++
 5 files changed, 81 insertions(+), 27 deletions(-)

diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c
index 4f9fa0189a07..3462b0ea14c6 100644
--- a/drivers/gpu/drm/msm/msm_drv.c
+++ b/drivers/gpu/drm/msm/msm_drv.c
@@ -476,6 +476,7 @@ static int msm_drm_init(struct device *dev, const struct 
drm_driver *drv)
 
INIT_LIST_HEAD(&priv->inactive_willneed);
INIT_LIST_HEAD(&priv->inactive_dontneed);
+   INIT_LIST_HEAD(&priv->inactive_purged);
mutex_init(&priv->mm_lock);
 
/* Teach lockdep about lock ordering wrt. shrinker: */
diff --git a/drivers/gpu/drm/msm/msm_drv.h b/drivers/gpu/drm/msm/msm_drv.h
index a1264cfcac5e..503168817e24 100644
--- a/drivers/gpu/drm/msm/msm_drv.h
+++ b/drivers/gpu/drm/msm/msm_drv.h
@@ -179,8 +179,8 @@ struct msm_drm_private {
 * inactive lists (depending on whether or not it is shrinkable) or
 * gpu->active_list (for the gpu it is active on[1])
 *
-* These lists are protected by mm_lock.  If struct_mutex is involved, 
it
-* should be aquired prior to mm_lock.  One should *not* hold mm_lock in
+* These lists are protected by mm_lock (which should be acquired
+* before per GEM object lock).  One should *not* hold mm_lock in
 * get_pages()/vmap()/etc paths, as they can trigger the shrinker.
 *
 * [1] if someone ever added support for the old 2d cores, there could 
be
@@ -188,6 +188,8 @@ struct msm_drm_private {
 */
struct list_head inactive_willneed;  /* inactive + !shrinkable */
struct list_head inactive_dontneed;  /* inactive +  shrinkable */
+   struct list_head inactive_purged;/* inactive +  purged */
+   long shrinkable_count;   /* write access under mm_lock */
struct mutex mm_lock;
 
struct workqueue_struct *wq;
diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index 9d10739c4eb2..bec01bb48fce 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++ b/drivers/gpu/drm/msm/msm_gem.c
@@ -719,6 +719,7 @@ void msm_gem_purge(struct drm_gem_object *obj)
put_iova_vmas(obj);
 
msm_obj->madv = __MSM_MADV_PURGED;
+   mark_unpurgable(msm_obj);
 
drm_vma_node_unmap(&obj->vma_node, dev->anon_inode->i_mapping);
drm_gem_free_mmap_offset(obj);
@@ -790,10 +791,11 @@ void msm_gem_active_get(struct drm_gem_object *obj, 
struct msm_gpu *gpu)
might_sleep();
WARN_ON(!msm_gem_is_locked(obj));
WARN_ON(msm_obj->madv != MSM_MADV_WILLNEED);
+   WARN_ON(msm_obj->dontneed);
 
if (msm_obj->active_count++ == 0) {
mutex_lock(&priv->mm_lock);
-   list_del_init(&msm_obj->mm_list);
+   list_del(&msm_obj->mm_list);
list_add_tail(&msm_obj->mm_list, &gpu->active_list);
mutex_unlock(&priv->mm_lock);
}
@@ -818,11 +820,19 @@ static void update_inactive(struct msm_gem_object 
*msm_obj)
mutex_lock(&priv->mm_lock);
WARN_ON(msm_obj->active_count != 0);
 
-   list_del_init(&msm_obj->mm_list);
-   if (msm_obj->madv == MSM_MADV_WILLNEED)
+   if (msm_obj->dontneed)
+   mark_unpurgable(msm_obj);
+
+   list_del(&msm_obj->mm_list);
+   if (msm_obj->madv == MSM_MADV_WILLNEED) {
list_add_tail(&msm_obj->mm_list, &priv->inactive_willneed);
-   else
+   } else if (msm_obj->madv == MSM_MADV_DONTNEED) {
list_add_tail(&msm_obj->mm_list, &priv->inactive_dontneed);
+   mark_purgable(msm_obj);
+   } else {
+   WARN_ON(msm_obj->madv != __MSM_MADV_PURGED);
+   list_add_tail(&msm_obj->mm_list, &priv->inactive_purged);
+   }
 
mutex_unlock(&priv->mm_lock);
 }
@@ -971,6 +981,8 @@ void msm_gem_free_object(struct drm_gem_object *obj)
struct msm_drm_private *priv = dev->dev_private;
 
mutex_lock(&priv->mm_lock);
+   if (msm_obj->dontneed)
+   mark_unpurgable(msm_obj);
list_del(&msm_obj->mm_list);
mutex_unlock(&priv->mm_lock);
 
diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h
index 7a9107cf1818..13aabfe92dac 100644
--- a/drivers/gp

[Freedreno] [PATCH v2 1/4] drm/msm: Remove unused freed llist node

2021-03-31 Thread Rob Clark
From: Rob Clark 

Unused since commit c951a9b284b9 ("drm/msm: Remove msm_gem_free_work")

Signed-off-by: Rob Clark 
Tested-by: Douglas Anderson 
---
 drivers/gpu/drm/msm/msm_gem.h | 2 --
 1 file changed, 2 deletions(-)

diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h
index b3a0a880cbab..7a9107cf1818 100644
--- a/drivers/gpu/drm/msm/msm_gem.h
+++ b/drivers/gpu/drm/msm/msm_gem.h
@@ -78,8 +78,6 @@ struct msm_gem_object {
 
struct list_head vmas;/* list of msm_gem_vma */
 
-   struct llist_node freed;
-
/* For physically contiguous buffers.  Used when we don't have
 * an IOMMU.  Also used for stolen/splashscreen buffer.
 */
-- 
2.30.2



[Freedreno] [PATCH v2 0/4] drm/msm: Shrinker (and related) fixes

2021-03-31 Thread Rob Clark
From: Rob Clark 

I've been spending some time looking into how things behave under high
memory pressure.  The first patch is a random cleanup I noticed along
the way.  The second improves the situation significantly when we are
getting shrinker called from many threads in parallel.  And the last
two are $debugfs/gem fixes I needed so I could monitor the state of GEM
objects (ie. how many are active/purgable/purged) while triggering high
memory pressure.

We could probably go a bit further with dropping the mm_lock in the
shrinker->scan() loop, but this is already a pretty big improvement.
The next step is probably actually to add support to unpin/evict
inactive objects.  (We are part way there since we have already de-
coupled the iova lifetime from the pages lifetime, but there are a
few sharp corners to work through.)

Rob Clark (4):
  drm/msm: Remove unused freed llist node
  drm/msm: Avoid mutex in shrinker_count()
  drm/msm: Fix debugfs deadlock
  drm/msm: Improved debugfs gem stats

 drivers/gpu/drm/msm/msm_debugfs.c  | 14 ++---
 drivers/gpu/drm/msm/msm_drv.c  |  4 ++
 drivers/gpu/drm/msm/msm_drv.h  | 15 --
 drivers/gpu/drm/msm/msm_fb.c   |  3 +-
 drivers/gpu/drm/msm/msm_gem.c  | 65 ++-
 drivers/gpu/drm/msm/msm_gem.h  | 72 +++---
 drivers/gpu/drm/msm/msm_gem_shrinker.c | 28 --
 7 files changed, 150 insertions(+), 51 deletions(-)

-- 
2.30.2



Re: [Freedreno] [PATCH 2/4] drm/msm: Avoid mutex in shrinker_count()

2021-03-31 Thread Rob Clark
On Wed, Mar 31, 2021 at 4:39 PM Doug Anderson  wrote:
>
> Hi,
>
> On Wed, Mar 31, 2021 at 4:23 PM Rob Clark  wrote:
> >
> > On Wed, Mar 31, 2021 at 3:44 PM Doug Anderson  wrote:
> > >
> > > Hi,
> > >
> > > On Wed, Mar 31, 2021 at 3:14 PM Rob Clark  wrote:
> > > >
> > > > @@ -818,11 +820,19 @@ static void update_inactive(struct msm_gem_object 
> > > > *msm_obj)
> > > > mutex_lock(&priv->mm_lock);
> > > > WARN_ON(msm_obj->active_count != 0);
> > > >
> > > > +   if (msm_obj->dontneed)
> > > > +   mark_unpurgable(msm_obj);
> > > > +
> > > > list_del_init(&msm_obj->mm_list);
> > > > -   if (msm_obj->madv == MSM_MADV_WILLNEED)
> > > > +   if (msm_obj->madv == MSM_MADV_WILLNEED) {
> > > > list_add_tail(&msm_obj->mm_list, 
> > > > &priv->inactive_willneed);
> > > > -   else
> > > > +   } else if (msm_obj->madv == MSM_MADV_DONTNEED) {
> > > > list_add_tail(&msm_obj->mm_list, 
> > > > &priv->inactive_dontneed);
> > > > +   mark_purgable(msm_obj);
> > > > +   } else {
> > > > +   WARN_ON(msm_obj->madv != __MSM_MADV_PURGED);
> > > > +   list_add_tail(&msm_obj->mm_list, 
> > > > &priv->inactive_purged);
> > >
> > > I'm probably being dense, but what's the point of adding it to the
> > > "inactive_purged" list here? You never look at that list, right? You
> > > already did a list_del_init() on this object's list pointer
> > > ("mm_list"). I don't see how adding it to a bogus list helps with
> > > anything.
> >
> > It preserves the "every bo is in one of these lists" statement, but
> > other than that you are right we aren't otherwise doing anything with
> > that list.  (Or we could replace the list_del_init() with list_del()..
> > I tend to instinctively go for list_del_init())
>
> If you really want this list, it wouldn't hurt to at least have a
> comment saying that it's not used for anything so people like me don't
> go trying to figure out what it's used for. ;-)
>
>
> > > > @@ -198,6 +203,33 @@ static inline bool is_vunmapable(struct 
> > > > msm_gem_object *msm_obj)
> > > > return (msm_obj->vmap_count == 0) && msm_obj->vaddr;
> > > >  }
> > > >
> > > > +static inline void mark_purgable(struct msm_gem_object *msm_obj)
> > > > +{
> > > > +   struct msm_drm_private *priv = msm_obj->base.dev->dev_private;
> > > > +
> > > > +   WARN_ON(!mutex_is_locked(&priv->mm_lock));
> > > > +
> > > > +   if (WARN_ON(msm_obj->dontneed))
> > > > +   return;
> > >
> > > The is_purgeable() function also checks other things besides just
> > > "MSM_MADV_DONTNEED". Do we need to check those too? Specifically:
> > >
> > >  msm_obj->sgt && !msm_obj->base.dma_buf && !msm_obj->base.import_attach
> > >
> > > ...or is it just being paranoid?
> > >
> > > I guess I'm just worried that if any of those might be important then
> > > we'll consistently report back that we have a count of things that can
> > > be purged but then scan() won't find anything to do. That wouldn't be
> > > great.
> >
> > Hmm, I thought msm_gem_madvise() returned an error instead of allowing
> > MSM_MADV_DONTNEED to be set on imported/exported dma-bufs.. it
> > probably should to be complete (but userspace already knows not to
> > madvise an imported/exported buffer for other reasons.. ie. we can't
> > let a shared buffer end up in the bo cache).  I'll re-work that a bit.
> >
> > The msm_obj->sgt case is a bit more tricky.. that will be the case of
> > a freshly allocated obj that does not have backing pages yet.  But
> > it seems like enough of a corner case, that I'm happy to live with
> > it.. ie. the tricky thing is not leaking decrements of
> > priv->shrinkable_count or underflowing priv->shrinkable_count, and
> > caring about the !msm_obj->sgt case doubles the number of states an
> > object can be in, and the shrinker->count() return value is just an
> > estimate.
>
> I think it's equally important to make sure that we don't constantly
> have a non-zero count and then have scan() do nothing.  If there's a
> transitory blip then it's fine, but it's not OK if it can be steady
> state. Then you end up with:
>
> 1. How many objects do you have to free? 10
> 2. OK, free some. How many did you free? 0
> 3. Oh. You got more to do, I'll call you again.
> 4. Goto #1
>
> ...and it just keeps looping, right?

Looking more closely at vmscan, it looks like we should return
SHRINK_STOP instead of zero
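
For context, SHRINK_STOP comes from <linux/shrinker.h>; a hedged sketch of
the scan() side of that idea (not the actual driver code):

static unsigned long
msm_gem_shrinker_scan(struct shrinker *shrinker, struct shrink_control *sc)
{
        unsigned long freed = 0;

        /* ... walk the purgeable list and purge up to sc->nr_to_scan
         * objects, incrementing freed for each one ... */

        /* Returning SHRINK_STOP when nothing could be freed tells vmscan
         * to stop calling us, instead of looping on a zero return. */
        return freed ? freed : SHRINK_STOP;
}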

BR,
-R

>
> As long as you're confident that this case can't happen then we're
> probably fine, but good to be careful. Is there any way we can make
> sure that a "freshly allocated object" isn't ever in the "DONTNEED"
> state?
>
>
> > > > +   priv->shrinkable_count += msm_obj->base.size >> PAGE_SHIFT;
> > > > +   msm_obj->dontneed = true;
> > > > +}
> > > > +
> > > > +static inline void mark_unpurgable(struct msm_gem_object *msm_obj)
> > > > +{
> > > > +   struct msm_drm_private *priv = msm_obj->base.dev->dev_pr

Re: [Freedreno] [PATCH 1/4] drm/msm: Remove unused freed llist node

2021-03-31 Thread Doug Anderson
Hi,

On Wed, Mar 31, 2021 at 3:14 PM Rob Clark  wrote:
>
> From: Rob Clark 
>
> Unused since c951a9b284b907604759628d273901064c60d09f

Not terribly important, but checkpatch always yells at me when I don't
reference commits by saying:

commit c951a9b284b9 ("drm/msm: Remove msm_gem_free_work")


> Signed-off-by: Rob Clark 
> ---
>  drivers/gpu/drm/msm/msm_gem.h | 2 --
>  1 file changed, 2 deletions(-)

Reviewed-by: Douglas Anderson 


Re: [Freedreno] [PATCH 0/4] drm/msm: Shrinker (and related) fixes

2021-03-31 Thread Doug Anderson
Hi,

On Wed, Mar 31, 2021 at 3:14 PM Rob Clark  wrote:
>
> From: Rob Clark 
>
> I've been spending some time looking into how things behave under high
> memory pressure.  The first patch is a random cleanup I noticed along
> the way.  The second improves the situation significantly when we are
> getting shrinker called from many threads in parallel.  And the last
> two are $debugfs/gem fixes I needed so I could monitor the state of GEM
> objects (ie. how many are active/purgable/purged) while triggering high
> memory pressure.
>
> We could probably go a bit further with dropping the mm_lock in the
> shrinker->scan() loop, but this is already a pretty big improvement.
> The next step is probably actually to add support to unpin/evict
> inactive objects.  (We are part way there since we have already de-
> coupled the iova lifetime from the pages lifetime, but there are a
> few sharp corners to work through.)
>
> Rob Clark (4):
>   drm/msm: Remove unused freed llist node
>   drm/msm: Avoid mutex in shrinker_count()
>   drm/msm: Fix debugfs deadlock
>   drm/msm: Improved debugfs gem stats
>
>  drivers/gpu/drm/msm/msm_debugfs.c  | 14 ++
>  drivers/gpu/drm/msm/msm_drv.c  |  4 ++
>  drivers/gpu/drm/msm/msm_drv.h  | 10 -
>  drivers/gpu/drm/msm/msm_fb.c   |  3 +-
>  drivers/gpu/drm/msm/msm_gem.c  | 61 +-
>  drivers/gpu/drm/msm/msm_gem.h  | 58 +---
>  drivers/gpu/drm/msm/msm_gem_shrinker.c | 17 +--
>  7 files changed, 122 insertions(+), 45 deletions(-)

This makes a pretty big reduction in jankiness when under memory
pressure and seems to work well for me.

Tested-by: Douglas Anderson 


Re: [Freedreno] [PATCH 2/4] drm/msm: Avoid mutex in shrinker_count()

2021-03-31 Thread Doug Anderson
Hi,

On Wed, Mar 31, 2021 at 4:23 PM Rob Clark  wrote:
>
> On Wed, Mar 31, 2021 at 3:44 PM Doug Anderson  wrote:
> >
> > Hi,
> >
> > On Wed, Mar 31, 2021 at 3:14 PM Rob Clark  wrote:
> > >
> > > @@ -818,11 +820,19 @@ static void update_inactive(struct msm_gem_object 
> > > *msm_obj)
> > > mutex_lock(&priv->mm_lock);
> > > WARN_ON(msm_obj->active_count != 0);
> > >
> > > +   if (msm_obj->dontneed)
> > > +   mark_unpurgable(msm_obj);
> > > +
> > > list_del_init(&msm_obj->mm_list);
> > > -   if (msm_obj->madv == MSM_MADV_WILLNEED)
> > > +   if (msm_obj->madv == MSM_MADV_WILLNEED) {
> > > list_add_tail(&msm_obj->mm_list, 
> > > &priv->inactive_willneed);
> > > -   else
> > > +   } else if (msm_obj->madv == MSM_MADV_DONTNEED) {
> > > list_add_tail(&msm_obj->mm_list, 
> > > &priv->inactive_dontneed);
> > > +   mark_purgable(msm_obj);
> > > +   } else {
> > > +   WARN_ON(msm_obj->madv != __MSM_MADV_PURGED);
> > > +   list_add_tail(&msm_obj->mm_list, &priv->inactive_purged);
> >
> > I'm probably being dense, but what's the point of adding it to the
> > "inactive_purged" list here? You never look at that list, right? You
> > already did a list_del_init() on this object's list pointer
> > ("mm_list"). I don't see how adding it to a bogus list helps with
> > anything.
>
> It preserves the "every bo is in one of these lists" statement, but
> other than that you are right we aren't otherwise doing anything with
> that list.  (Or we could replace the list_del_init() with list_del()..
> I tend to instinctively go for list_del_init())

If you really want this list, it wouldn't hurt to at least have a
comment saying that it's not used for anything so people like me don't
go trying to figure out what it's used for. ;-)


> > > @@ -198,6 +203,33 @@ static inline bool is_vunmapable(struct 
> > > msm_gem_object *msm_obj)
> > > return (msm_obj->vmap_count == 0) && msm_obj->vaddr;
> > >  }
> > >
> > > +static inline void mark_purgable(struct msm_gem_object *msm_obj)
> > > +{
> > > +   struct msm_drm_private *priv = msm_obj->base.dev->dev_private;
> > > +
> > > +   WARN_ON(!mutex_is_locked(&priv->mm_lock));
> > > +
> > > +   if (WARN_ON(msm_obj->dontneed))
> > > +   return;
> >
> > The is_purgeable() function also checks other things besides just
> > "MSM_MADV_DONTNEED". Do we need to check those too? Specifically:
> >
> >  msm_obj->sgt && !msm_obj->base.dma_buf && !msm_obj->base.import_attach
> >
> > ...or is it just being paranoid?
> >
> > I guess I'm just worried that if any of those might be important then
> > we'll consistently report back that we have a count of things that can
> > be purged but then scan() won't find anything to do. That wouldn't be
> > great.
>
> Hmm, I thought msm_gem_madvise() returned an error instead of allowing
> MSM_MADV_DONTNEED to be set on imported/exported dma-bufs.. it
> probably should to be complete (but userspace already knows not to
> madvise an imported/exported buffer for other reasons.. ie. we can't
> let a shared buffer end up in the bo cache).  I'll re-work that a bit.
>
> The msm_obj->sgt case is a bit more tricky.. that will be the case of
> a freshly allocated obj that does not have backing pages yet.  But
> it seems like enough of a corner case, that I'm happy to live with
> it.. ie. the tricky thing is not leaking decrements of
> priv->shrinkable_count or underflowing priv->shrinkable_count, and
> caring about the !msm_obj->sgt case doubles the number of states an
> object can be in, and the shrinker->count() return value is just an
> estimate.

I think it's equally important to make sure that we don't constantly
have a non-zero count and then have scan() do nothing.  If there's a
transitory blip then it's fine, but it's not OK if it can be steady
state. Then you end up with:

1. How many objects do you have to free? 10
2. OK, free some. How many did you free? 0
3. Oh. You got more to do, I'll call you again.
4. Goto #1

...and it just keeps looping, right?

As long as you're confident that this case can't happen then we're
probably fine, but good to be careful. Is there any way we can make
sure that a "freshly allocated object" isn't ever in the "DONTNEED"
state?


> > > +   priv->shrinkable_count += msm_obj->base.size >> PAGE_SHIFT;
> > > +   msm_obj->dontneed = true;
> > > +}
> > > +
> > > +static inline void mark_unpurgable(struct msm_gem_object *msm_obj)
> > > +{
> > > +   struct msm_drm_private *priv = msm_obj->base.dev->dev_private;
> > > +
> > > +   WARN_ON(!mutex_is_locked(&priv->mm_lock));
> > > +
> > > +   if (WARN_ON(!msm_obj->dontneed))
> > > +   return;
> > > +
> > > +   priv->shrinkable_count -= msm_obj->base.size >> PAGE_SHIFT;
> > > +   WARN_ON(priv->shrinkable_count < 0);
> >
> > If you changed the order maybe you could make shrinkable_count
> > "unsigned long" to 

Re: [Freedreno] [PATCH 4/4] drm/msm: Improved debugfs gem stats

2021-03-31 Thread Doug Anderson
Hi,

On Wed, Mar 31, 2021 at 3:14 PM Rob Clark  wrote:
>
> From: Rob Clark 
>
> The last patch lost the breakdown of active vs inactive GEM objects in
> $debugfs/gem.  But we can add some better stats to summarize not just
> active vs inactive, but also purgable/purged to make up for that.
>
> Signed-off-by: Rob Clark 
> ---
>  drivers/gpu/drm/msm/msm_fb.c  |  3 ++-
>  drivers/gpu/drm/msm/msm_gem.c | 31 ---
>  drivers/gpu/drm/msm/msm_gem.h | 11 ++-
>  3 files changed, 36 insertions(+), 9 deletions(-)
>
> diff --git a/drivers/gpu/drm/msm/msm_fb.c b/drivers/gpu/drm/msm/msm_fb.c
> index d42f0665359a..887172a10c9a 100644
> --- a/drivers/gpu/drm/msm/msm_fb.c
> +++ b/drivers/gpu/drm/msm/msm_fb.c
> @@ -33,6 +33,7 @@ static const struct drm_framebuffer_funcs 
> msm_framebuffer_funcs = {
>  #ifdef CONFIG_DEBUG_FS
>  void msm_framebuffer_describe(struct drm_framebuffer *fb, struct seq_file *m)
>  {
> +   struct msm_gem_stats stats = {{0}};

nit: instead of "{{0}}", can't you just do:

struct msm_gem_stats stats = {};

...both here and for the other usage.

Other than that this seems good to me.

Reviewed-by: Douglas Anderson 


Re: [Freedreno] [PATCH 3/4] drm/msm: Fix debugfs deadlock

2021-03-31 Thread Rob Clark
On Wed, Mar 31, 2021 at 4:13 PM Doug Anderson  wrote:
>
> Hi,
>
> On Wed, Mar 31, 2021 at 3:14 PM Rob Clark  wrote:
> >
> > @@ -111,23 +111,15 @@ static const struct file_operations msm_gpu_fops = {
> >  static int msm_gem_show(struct drm_device *dev, struct seq_file *m)
> >  {
> > struct msm_drm_private *priv = dev->dev_private;
> > -   struct msm_gpu *gpu = priv->gpu;
> > int ret;
> >
> > -   ret = mutex_lock_interruptible(&priv->mm_lock);
> > +   ret = mutex_lock_interruptible(&priv->obj_lock);
> > if (ret)
> > return ret;
> >
> > -   if (gpu) {
> > -   seq_printf(m, "Active Objects (%s):\n", gpu->name);
> > -   msm_gem_describe_objects(&gpu->active_list, m);
> > -   }
> > -
> > -   seq_printf(m, "Inactive Objects:\n");
> > -   msm_gem_describe_objects(&priv->inactive_dontneed, m);
> > -   msm_gem_describe_objects(&priv->inactive_willneed, m);
> > +   msm_gem_describe_objects(&priv->objects, m);
>
> I guess we no longer sort by Active and Inactive but that doesn't
> really matter?

It turned out that sorting by active/inactive was less useful than just
having the summary at the bottom that the last patch adds.  We
can already tell from the per-object entries whether it is
active/purgable/purged.

I did initially try to come up with an approach that let me keep this,
but it would basically amount to re-writing the gem_submit path
(because you cannot do any memory allocation under mm_lock)

>
> > @@ -174,7 +174,13 @@ struct msm_drm_private {
> > struct msm_rd_state *hangrd;   /* debugfs to dump hanging submits */
> > struct msm_perf_state *perf;
> >
> > -   /*
> > +   /**
> > +* List of all GEM objects (mainly for debugfs, protected by 
> > obj_lock
>
> It wouldn't hurt to talk about lock ordering here? Like: "If we need
> the "obj_lock" and a "gem_lock" at the same time we always grab the
> "obj_lock" first.

good point

>
> > @@ -60,13 +60,20 @@ struct msm_gem_object {
> >  */
> > uint8_t vmap_count;
> >
> > -   /* And object is either:
> > -*  inactive - on priv->inactive_list
> > +   /**
> > +* Node in list of all objects (mainly for debugfs, protected by
> > +* struct_mutex
>
> Not "struct_mutex" in comment, right? Maybe "obj_lock" I think?

oh, right, forgot to fix that from an earlier iteration

BR,
-R


Re: [Freedreno] [PATCH 2/4] drm/msm: Avoid mutex in shrinker_count()

2021-03-31 Thread Rob Clark
On Wed, Mar 31, 2021 at 3:44 PM Doug Anderson  wrote:
>
> Hi,
>
> On Wed, Mar 31, 2021 at 3:14 PM Rob Clark  wrote:
> >
> > @@ -818,11 +820,19 @@ static void update_inactive(struct msm_gem_object 
> > *msm_obj)
> > mutex_lock(&priv->mm_lock);
> > WARN_ON(msm_obj->active_count != 0);
> >
> > +   if (msm_obj->dontneed)
> > +   mark_unpurgable(msm_obj);
> > +
> > list_del_init(&msm_obj->mm_list);
> > -   if (msm_obj->madv == MSM_MADV_WILLNEED)
> > +   if (msm_obj->madv == MSM_MADV_WILLNEED) {
> > list_add_tail(&msm_obj->mm_list, &priv->inactive_willneed);
> > -   else
> > +   } else if (msm_obj->madv == MSM_MADV_DONTNEED) {
> > list_add_tail(&msm_obj->mm_list, &priv->inactive_dontneed);
> > +   mark_purgable(msm_obj);
> > +   } else {
> > +   WARN_ON(msm_obj->madv != __MSM_MADV_PURGED);
> > +   list_add_tail(&msm_obj->mm_list, &priv->inactive_purged);
>
> I'm probably being dense, but what's the point of adding it to the
> "inactive_purged" list here? You never look at that list, right? You
> already did a list_del_init() on this object's list pointer
> ("mm_list"). I don't see how adding it to a bogus list helps with
> anything.

It preserves the "every bo is in one of these lists" statement, but
other than that you are right we aren't otherwise doing anything with
that list.  (Or we could replace the list_del_init() with list_del()..
I tend to instinctively go for list_del_init())
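
For background on that distinction, a small standalone illustration (generic
list API usage, not driver code):

#include <linux/bug.h>
#include <linux/list.h>

static void list_del_example(struct list_head *head, struct list_head *node)
{
        list_del(node);          /* unlinks; the node's pointers are poisoned,
                                  * so it must be re-initialized before reuse */
        INIT_LIST_HEAD(node);
        list_add_tail(node, head);

        list_del_init(node);     /* unlinks and re-initializes in one step */
        WARN_ON(!list_empty(node));  /* list_empty() on the node stays valid */
}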

>
> > @@ -198,6 +203,33 @@ static inline bool is_vunmapable(struct msm_gem_object 
> > *msm_obj)
> > return (msm_obj->vmap_count == 0) && msm_obj->vaddr;
> >  }
> >
> > +static inline void mark_purgable(struct msm_gem_object *msm_obj)
> > +{
> > +   struct msm_drm_private *priv = msm_obj->base.dev->dev_private;
> > +
> > +   WARN_ON(!mutex_is_locked(&priv->mm_lock));
> > +
> > +   if (WARN_ON(msm_obj->dontneed))
> > +   return;
>
> The is_purgeable() function also checks other things besides just
> "MSM_MADV_DONTNEED". Do we need to check those too? Specifically:
>
>  msm_obj->sgt && !msm_obj->base.dma_buf && !msm_obj->base.import_attach
>
> ...or is it just being paranoid?
>
> I guess I'm just worried that if any of those might be important then
> we'll consistently report back that we have a count of things that can
> be purged but then scan() won't find anything to do. That wouldn't be
> great.

Hmm, I thought msm_gem_madvise() returned an error instead of allowing
MSM_MADV_DONTNEED to be set on imported/exported dma-bufs.. it
probably should to be complete (but userspace already knows not to
madvise an imported/exported buffer for other reasons.. ie. we can't
let a shared buffer end up in the bo cache).  I'll re-work that a bit.

The msm_obj->sgt case is a bit more tricky.. that will be the case of
a freshly allocated obj that does not have backing pages yet.  But
it seems like enough of a corner case, that I'm happy to live with
it.. ie. the tricky thing is not leaking decrements of
priv->shrinkable_count or underflowing priv->shrinkable_count, and
caring about the !msm_obj->sgt case doubles the number of states an
object can be in, and the shrinker->count() return value is just an
estimate.
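
A minimal sketch of the imported/exported check being discussed (hypothetical
helper, not the current msm_gem_madvise()):

/* Shared (imported or exported) dma-bufs must never become purgeable,
 * otherwise a buffer another process or driver still uses could be
 * purged out from under it. */
static bool can_madv_dontneed(struct drm_gem_object *obj)
{
        return !obj->dma_buf && !obj->import_attach;
}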

>
> > +   priv->shrinkable_count += msm_obj->base.size >> PAGE_SHIFT;
> > +   msm_obj->dontneed = true;
> > +}
> > +
> > +static inline void mark_unpurgable(struct msm_gem_object *msm_obj)
> > +{
> > +   struct msm_drm_private *priv = msm_obj->base.dev->dev_private;
> > +
> > +   WARN_ON(!mutex_is_locked(&priv->mm_lock));
> > +
> > +   if (WARN_ON(!msm_obj->dontneed))
> > +   return;
> > +
> > +   priv->shrinkable_count -= msm_obj->base.size >> PAGE_SHIFT;
> > +   WARN_ON(priv->shrinkable_count < 0);
>
> If you changed the order maybe you could make shrinkable_count
> "unsigned long" to match the shrinker API?
>
>  new_shrinkable = msm_obj->base.size >> PAGE_SHIFT;
>  WARN_ON(new_shrinkable > priv->shrinkable_count);
>  priv->shrinkable_count -= new_shrinkable;
>

True, although I've developed a preference for signed integers in
cases where it can underflow if you mess up
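
A tiny standalone illustration of that trade-off (not driver code): with a
signed counter the underflow check actually fires, while an unsigned counter
would silently wrap:

static void counter_underflow_example(void)
{
        long count = 0;

        count -= 4;             /* simulated unbalanced decrement (a bug) */
        WARN_ON(count < 0);     /* fires and points at the bug */

        /* Had count been unsigned long, it would now hold a huge wrapped
         * value and a "count < 0" check could never fire. */
}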

BR,
-R


Re: [Freedreno] [PATCH 3/4] drm/msm: Fix debugfs deadlock

2021-03-31 Thread Doug Anderson
Hi,

On Wed, Mar 31, 2021 at 3:14 PM Rob Clark  wrote:
>
> @@ -111,23 +111,15 @@ static const struct file_operations msm_gpu_fops = {
>  static int msm_gem_show(struct drm_device *dev, struct seq_file *m)
>  {
> struct msm_drm_private *priv = dev->dev_private;
> -   struct msm_gpu *gpu = priv->gpu;
> int ret;
>
> -   ret = mutex_lock_interruptible(&priv->mm_lock);
> +   ret = mutex_lock_interruptible(&priv->obj_lock);
> if (ret)
> return ret;
>
> -   if (gpu) {
> -   seq_printf(m, "Active Objects (%s):\n", gpu->name);
> -   msm_gem_describe_objects(&gpu->active_list, m);
> -   }
> -
> -   seq_printf(m, "Inactive Objects:\n");
> -   msm_gem_describe_objects(&priv->inactive_dontneed, m);
> -   msm_gem_describe_objects(&priv->inactive_willneed, m);
> +   msm_gem_describe_objects(&priv->objects, m);

I guess we no longer sort by Active and Inactive but that doesn't
really matter?


> @@ -174,7 +174,13 @@ struct msm_drm_private {
> struct msm_rd_state *hangrd;   /* debugfs to dump hanging submits */
> struct msm_perf_state *perf;
>
> -   /*
> +   /**
> +* List of all GEM objects (mainly for debugfs, protected by obj_lock

It wouldn't hurt to talk about lock ordering here? Like: "If we need
the "obj_lock" and a "gem_lock" at the same time we always grab the
"obj_lock" first.

> @@ -60,13 +60,20 @@ struct msm_gem_object {
>  */
> uint8_t vmap_count;
>
> -   /* And object is either:
> -*  inactive - on priv->inactive_list
> +   /**
> +* Node in list of all objects (mainly for debugfs, protected by
> +* struct_mutex

Not "struct_mutex" in comment, right? Maybe "obj_lock" I think?

-Doug


Re: [Freedreno] [PATCH 2/4] drm/msm: Avoid mutex in shrinker_count()

2021-03-31 Thread Doug Anderson
Hi,

On Wed, Mar 31, 2021 at 3:14 PM Rob Clark  wrote:
>
> @@ -818,11 +820,19 @@ static void update_inactive(struct msm_gem_object 
> *msm_obj)
> mutex_lock(&priv->mm_lock);
> WARN_ON(msm_obj->active_count != 0);
>
> +   if (msm_obj->dontneed)
> +   mark_unpurgable(msm_obj);
> +
> list_del_init(&msm_obj->mm_list);
> -   if (msm_obj->madv == MSM_MADV_WILLNEED)
> +   if (msm_obj->madv == MSM_MADV_WILLNEED) {
> list_add_tail(&msm_obj->mm_list, &priv->inactive_willneed);
> -   else
> +   } else if (msm_obj->madv == MSM_MADV_DONTNEED) {
> list_add_tail(&msm_obj->mm_list, &priv->inactive_dontneed);
> +   mark_purgable(msm_obj);
> +   } else {
> +   WARN_ON(msm_obj->madv != __MSM_MADV_PURGED);
> +   list_add_tail(&msm_obj->mm_list, &priv->inactive_purged);

I'm probably being dense, but what's the point of adding it to the
"inactive_purged" list here? You never look at that list, right? You
already did a list_del_init() on this object's list pointer
("mm_list"). I don't see how adding it to a bogus list helps with
anything.


> @@ -198,6 +203,33 @@ static inline bool is_vunmapable(struct msm_gem_object 
> *msm_obj)
> return (msm_obj->vmap_count == 0) && msm_obj->vaddr;
>  }
>
> +static inline void mark_purgable(struct msm_gem_object *msm_obj)
> +{
> +   struct msm_drm_private *priv = msm_obj->base.dev->dev_private;
> +
> +   WARN_ON(!mutex_is_locked(&priv->mm_lock));
> +
> +   if (WARN_ON(msm_obj->dontneed))
> +   return;

The is_purgeable() function also checks other things besides just
"MSM_MADV_DONTNEED". Do we need to check those too? Specifically:

 msm_obj->sgt && !msm_obj->base.dma_buf && !msm_obj->base.import_attach

...or is it just being paranoid?

I guess I'm just worried that if any of those might be important then
we'll consistently report back that we have a count of things that can
be purged but then scan() won't find anything to do. That wouldn't be
great.


> +   priv->shrinkable_count += msm_obj->base.size >> PAGE_SHIFT;
> +   msm_obj->dontneed = true;
> +}
> +
> +static inline void mark_unpurgable(struct msm_gem_object *msm_obj)
> +{
> +   struct msm_drm_private *priv = msm_obj->base.dev->dev_private;
> +
> +   WARN_ON(!mutex_is_locked(&priv->mm_lock));
> +
> +   if (WARN_ON(!msm_obj->dontneed))
> +   return;
> +
> +   priv->shrinkable_count -= msm_obj->base.size >> PAGE_SHIFT;
> +   WARN_ON(priv->shrinkable_count < 0);

If you changed the order maybe you could make shrinkable_count
"unsigned long" to match the shrinker API?

 new_shrinkable = msm_obj->base.size >> PAGE_SHIFT;
 WARN_ON(new_shrinkable > priv->shrinkable_count);
 priv->shrinkable_count -= new_shrinkable;


-Doug


Re: [Freedreno] [v1] drm/msm/disp/dpu1: fix warn stack reported during dpu resume

2021-03-31 Thread Rob Clark
On Wed, Mar 31, 2021 at 9:03 AM Dmitry Baryshkov
 wrote:
>
> On 31/03/2021 14:27, Kalyan Thota wrote:
> > WARN_ON was introduced by the below commit to catch runtime resumes
> > that are getting triggered before icc path was set.
> >
> > "drm/msm/disp/dpu1: icc path needs to be set before dpu runtime resume"
> >
> > For the targets where the bw scaling is not enabled, this WARN_ON is
> > a false alarm. Fix the WARN condition appropriately.
>
> Should we change all DPU targets to move bw scaling to the mdp device from
> the mdss node? The limitation to sc7180 looks artificial.

yes, we should, this keeps biting us on 845

> >
> > Reported-by: Steev Klimaszewski 

Please add Fixes: tag as well

> > Signed-off-by: Kalyan Thota 
> > ---
> >   drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c  |  8 +---
> >   drivers/gpu/drm/msm/disp/dpu1/dpu_kms.h  |  9 +
> >   drivers/gpu/drm/msm/disp/dpu1/dpu_mdss.c | 11 ++-
> >   3 files changed, 20 insertions(+), 8 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c 
> > b/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c
> > index cab387f..0071a4d 100644
> > --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c
> > +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c
> > @@ -294,6 +294,9 @@ static int dpu_kms_parse_data_bus_icc_path(struct 
> > dpu_kms *dpu_kms)
> >   struct icc_path *path1;
> >   struct drm_device *dev = dpu_kms->dev;
> >
> > + if (!dpu_supports_bw_scaling(dev))
> > + return 0;
> > +
> >   path0 = of_icc_get(dev->dev, "mdp0-mem");
> >   path1 = of_icc_get(dev->dev, "mdp1-mem");
> >
> > @@ -934,8 +937,7 @@ static int dpu_kms_hw_init(struct msm_kms *kms)
> >   DPU_DEBUG("REG_DMA is not defined");
> >   }
> >
> > - if (of_device_is_compatible(dev->dev->of_node, "qcom,sc7180-mdss"))
> > - dpu_kms_parse_data_bus_icc_path(dpu_kms);
> > + dpu_kms_parse_data_bus_icc_path(dpu_kms);
> >
> >   pm_runtime_get_sync(&dpu_kms->pdev->dev);
> >
> > @@ -1198,7 +1200,7 @@ static int __maybe_unused dpu_runtime_resume(struct 
> > device *dev)
> >
> >   ddev = dpu_kms->dev;
> >
> > - WARN_ON(!(dpu_kms->num_paths));
> > + WARN_ON((dpu_supports_bw_scaling(ddev) && !dpu_kms->num_paths));
> >   /* Min vote of BW is required before turning on AXI clk */
> >   for (i = 0; i < dpu_kms->num_paths; i++)
> >   icc_set_bw(dpu_kms->path[i], 0, Bps_to_icc(MIN_IB_BW));
> > diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.h 
> > b/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.h
> > index d6717d6..f7bcc0a 100644
> > --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.h
> > +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.h
> > @@ -154,6 +154,15 @@ struct vsync_info {
> >
> >   #define to_dpu_global_state(x) container_of(x, struct dpu_global_state, 
> > base)
> >
> > +/**
> > + * dpu_supports_bw_scaling: returns true for drivers that support bw 
> > scaling.
> > + * @dev: Pointer to drm_device structure
> > + */
> > +static inline int dpu_supports_bw_scaling(struct drm_device *dev)
> > +{
> > + return of_device_is_compatible(dev->dev->of_node, "qcom,sc7180-mdss");
> > +}
> > +
> >   /* Global private object state for tracking resources that are shared 
> > across
> >* multiple kms objects (planes/crtcs/etc).
> >*/
> > diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_mdss.c 
> > b/drivers/gpu/drm/msm/disp/dpu1/dpu_mdss.c
> > index cd40788..8cd712c 100644
> > --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_mdss.c
> > +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_mdss.c
> > @@ -41,6 +41,9 @@ static int dpu_mdss_parse_data_bus_icc_path(struct 
> > drm_device *dev,
> >   struct icc_path *path0 = of_icc_get(dev->dev, "mdp0-mem");
> >   struct icc_path *path1 = of_icc_get(dev->dev, "mdp1-mem");
> >
> > + if (dpu_supports_bw_scaling(dev))
> > + return 0;
> > +
> >   if (IS_ERR_OR_NULL(path0))
> >   return PTR_ERR_OR_ZERO(path0);
> >
> > @@ -276,11 +279,9 @@ int dpu_mdss_init(struct drm_device *dev)
> >
> >   DRM_DEBUG("mapped mdss address space @%pK\n", dpu_mdss->mmio);
> >
> > - if (!of_device_is_compatible(dev->dev->of_node, "qcom,sc7180-mdss")) {
> > - ret = dpu_mdss_parse_data_bus_icc_path(dev, dpu_mdss);
> > - if (ret)
> > - return ret;
> > - }
> > + ret = dpu_mdss_parse_data_bus_icc_path(dev, dpu_mdss);
> > + if (ret)
> > + return ret;
> >
> >   mp = &dpu_mdss->mp;
> >   ret = msm_dss_parse_clock(pdev, mp);
> >
>
>
> --
> With best wishes
> Dmitry


Re: [Freedreno] [PATCH] mailmap: Update email address for Jordan Crouse

2021-03-31 Thread Rob Clark
On Thu, Mar 25, 2021 at 7:37 AM Jordan Crouse  wrote:
>
> jcrouse at codeaurora.org ha started bouncing. Redirect to a

nit: s/ha/has/

> more permanent address.
>
> Signed-off-by: Jordan Crouse 

Acked-by: Rob Clark 

> ---
>
>  .mailmap | 1 +
>  1 file changed, 1 insertion(+)
>
> diff --git a/.mailmap b/.mailmap
> index 85b93cdefc87..8c489cb1d1ce 100644
> --- a/.mailmap
> +++ b/.mailmap
> @@ -165,6 +165,7 @@ Johan Hovold  
>  Johan Hovold  
>  John Paul Adrian Glaubitz 
>  John Stultz 
> +Jordan Crouse  
>   
>   
>   
> --
> 2.25.1
>


[Freedreno] [PATCH 4/4] drm/msm: Improved debugfs gem stats

2021-03-31 Thread Rob Clark
From: Rob Clark 

The last patch lost the breakdown of active vs inactive GEM objects in
$debugfs/gem.  Make up for that by adding better stats that summarize not
just active vs inactive, but also purgeable and purged objects.
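
With that, the tail of $debugfs/gem ends up looking roughly like the
following (illustrative numbers only; the format matches the seq_printf()
calls added below):

  Total:      37 objects,  48758784 bytes
  Active:      6 objects,  16777216 bytes
  Purgable:   12 objects,   8388608 bytes
  Purged:      3 objects,   1048576 bytes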

Signed-off-by: Rob Clark 
---
 drivers/gpu/drm/msm/msm_fb.c  |  3 ++-
 drivers/gpu/drm/msm/msm_gem.c | 31 ---
 drivers/gpu/drm/msm/msm_gem.h | 11 ++-
 3 files changed, 36 insertions(+), 9 deletions(-)

diff --git a/drivers/gpu/drm/msm/msm_fb.c b/drivers/gpu/drm/msm/msm_fb.c
index d42f0665359a..887172a10c9a 100644
--- a/drivers/gpu/drm/msm/msm_fb.c
+++ b/drivers/gpu/drm/msm/msm_fb.c
@@ -33,6 +33,7 @@ static const struct drm_framebuffer_funcs 
msm_framebuffer_funcs = {
 #ifdef CONFIG_DEBUG_FS
 void msm_framebuffer_describe(struct drm_framebuffer *fb, struct seq_file *m)
 {
+   struct msm_gem_stats stats = {{0}};
int i, n = fb->format->num_planes;
 
seq_printf(m, "fb: %dx%d@%4.4s (%2d, ID:%d)\n",
@@ -42,7 +43,7 @@ void msm_framebuffer_describe(struct drm_framebuffer *fb, 
struct seq_file *m)
for (i = 0; i < n; i++) {
seq_printf(m, "   %d: offset=%d pitch=%d, obj: ",
i, fb->offsets[i], fb->pitches[i]);
-   msm_gem_describe(fb->obj[i], m);
+   msm_gem_describe(fb->obj[i], m, &stats);
}
 }
 #endif
diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index c184ea68a6d0..a933ca5dc6df 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++ b/drivers/gpu/drm/msm/msm_gem.c
@@ -873,7 +873,8 @@ static void describe_fence(struct dma_fence *fence, const 
char *type,
fence->seqno);
 }
 
-void msm_gem_describe(struct drm_gem_object *obj, struct seq_file *m)
+void msm_gem_describe(struct drm_gem_object *obj, struct seq_file *m,
+   struct msm_gem_stats *stats)
 {
struct msm_gem_object *msm_obj = to_msm_bo(obj);
struct dma_resv *robj = obj->resv;
@@ -885,11 +886,23 @@ void msm_gem_describe(struct drm_gem_object *obj, struct 
seq_file *m)
 
msm_gem_lock(obj);
 
+   stats->all.count++;
+   stats->all.size += obj->size;
+
+   if (is_active(msm_obj)) {
+   stats->active.count++;
+   stats->active.size += obj->size;
+   }
+
switch (msm_obj->madv) {
case __MSM_MADV_PURGED:
+   stats->purged.count++;
+   stats->purged.size += obj->size;
madv = " purged";
break;
case MSM_MADV_DONTNEED:
+   stats->purgable.count++;
+   stats->purgable.size += obj->size;
madv = " purgeable";
break;
case MSM_MADV_WILLNEED:
@@ -956,20 +969,24 @@ void msm_gem_describe(struct drm_gem_object *obj, struct 
seq_file *m)
 
 void msm_gem_describe_objects(struct list_head *list, struct seq_file *m)
 {
+   struct msm_gem_stats stats = {{0}};
struct msm_gem_object *msm_obj;
-   int count = 0;
-   size_t size = 0;
 
seq_puts(m, "   flags   id ref  offset   kaddrsize 
madv  name\n");
list_for_each_entry(msm_obj, list, node) {
struct drm_gem_object *obj = &msm_obj->base;
seq_puts(m, "   ");
-   msm_gem_describe(obj, m);
-   count++;
-   size += obj->size;
+   msm_gem_describe(obj, m, &stats);
}
 
-   seq_printf(m, "Total %d objects, %zu bytes\n", count, size);
+   seq_printf(m, "Total:%4d objects, %9zu bytes\n",
+   stats.all.count, stats.all.size);
+   seq_printf(m, "Active:   %4d objects, %9zu bytes\n",
+   stats.active.count, stats.active.size);
+   seq_printf(m, "Purgable: %4d objects, %9zu bytes\n",
+   stats.purgable.count, stats.purgable.size);
+   seq_printf(m, "Purged:   %4d objects, %9zu bytes\n",
+   stats.purged.count, stats.purged.size);
 }
 #endif
 
diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h
index 49956196025e..43510ac070dd 100644
--- a/drivers/gpu/drm/msm/msm_gem.h
+++ b/drivers/gpu/drm/msm/msm_gem.h
@@ -158,7 +158,16 @@ struct drm_gem_object *msm_gem_import(struct drm_device 
*dev,
 __printf(2, 3)
 void msm_gem_object_set_name(struct drm_gem_object *bo, const char *fmt, ...);
 #ifdef CONFIG_DEBUG_FS
-void msm_gem_describe(struct drm_gem_object *obj, struct seq_file *m);
+
+struct msm_gem_stats {
+   struct {
+   unsigned count;
+   size_t size;
+   } all, active, purgable, purged;
+};
+
+void msm_gem_describe(struct drm_gem_object *obj, struct seq_file *m,
+   struct msm_gem_stats *stats);
 void msm_gem_describe_objects(struct list_head *list, struct seq_file *m);
 #endif
 
-- 
2.30.2


[Freedreno] [PATCH 2/4] drm/msm: Avoid mutex in shrinker_count()

2021-03-31 Thread Rob Clark
From: Rob Clark 

When the system is under heavy memory pressure, we can end up with lots
of concurrent calls into the shrinker.  Keeping a running tally of what we
can shrink avoids grabbing a lock in shrinker->count(), and avoids
shrinker->scan() getting called when it is not profitable.

Also, we can keep purged objects on their own list so we don't re-traverse
them, which further cuts down time spent in the critical section.
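
As a rough illustration, the count side can then be as cheap as the sketch
below (this assumes the shrinker is embedded in msm_drm_private and uses
the shrinkable_count field added here; the actual shrinker hunks are
truncated in this archive, so this is not the literal patch):

  static unsigned long
  msm_gem_shrinker_count(struct shrinker *shrinker, struct shrink_control *sc)
  {
          struct msm_drm_private *priv =
                  container_of(shrinker, struct msm_drm_private, shrinker);

          /*
           * shrinkable_count is only written under mm_lock (see
           * mark_purgable()/mark_unpurgable() below); a racy read is fine
           * here because ->count() is just a hint for the shrinker core.
           */
          return priv->shrinkable_count;
  }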

Signed-off-by: Rob Clark 
---
 drivers/gpu/drm/msm/msm_drv.c  |  1 +
 drivers/gpu/drm/msm/msm_drv.h  |  2 ++
 drivers/gpu/drm/msm/msm_gem.c  | 16 +++--
 drivers/gpu/drm/msm/msm_gem.h  | 32 ++
 drivers/gpu/drm/msm/msm_gem_shrinker.c | 17 +-
 5 files changed, 50 insertions(+), 18 deletions(-)

diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c
index 4f9fa0189a07..3462b0ea14c6 100644
--- a/drivers/gpu/drm/msm/msm_drv.c
+++ b/drivers/gpu/drm/msm/msm_drv.c
@@ -476,6 +476,7 @@ static int msm_drm_init(struct device *dev, const struct 
drm_driver *drv)
 
INIT_LIST_HEAD(&priv->inactive_willneed);
INIT_LIST_HEAD(&priv->inactive_dontneed);
+   INIT_LIST_HEAD(&priv->inactive_purged);
mutex_init(&priv->mm_lock);
 
/* Teach lockdep about lock ordering wrt. shrinker: */
diff --git a/drivers/gpu/drm/msm/msm_drv.h b/drivers/gpu/drm/msm/msm_drv.h
index a1264cfcac5e..3ead5755f695 100644
--- a/drivers/gpu/drm/msm/msm_drv.h
+++ b/drivers/gpu/drm/msm/msm_drv.h
@@ -188,6 +188,8 @@ struct msm_drm_private {
 */
struct list_head inactive_willneed;  /* inactive + !shrinkable */
struct list_head inactive_dontneed;  /* inactive +  shrinkable */
+   struct list_head inactive_purged;/* inactive +  purged */
+   int shrinkable_count;/* write access under mm_lock */
struct mutex mm_lock;
 
struct workqueue_struct *wq;
diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index 9d10739c4eb2..74a92eedc992 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++ b/drivers/gpu/drm/msm/msm_gem.c
@@ -719,6 +719,7 @@ void msm_gem_purge(struct drm_gem_object *obj)
put_iova_vmas(obj);
 
msm_obj->madv = __MSM_MADV_PURGED;
+   mark_unpurgable(msm_obj);
 
drm_vma_node_unmap(&obj->vma_node, dev->anon_inode->i_mapping);
drm_gem_free_mmap_offset(obj);
@@ -790,6 +791,7 @@ void msm_gem_active_get(struct drm_gem_object *obj, struct 
msm_gpu *gpu)
might_sleep();
WARN_ON(!msm_gem_is_locked(obj));
WARN_ON(msm_obj->madv != MSM_MADV_WILLNEED);
+   WARN_ON(msm_obj->dontneed);
 
if (msm_obj->active_count++ == 0) {
mutex_lock(&priv->mm_lock);
@@ -818,11 +820,19 @@ static void update_inactive(struct msm_gem_object 
*msm_obj)
mutex_lock(&priv->mm_lock);
WARN_ON(msm_obj->active_count != 0);
 
+   if (msm_obj->dontneed)
+   mark_unpurgable(msm_obj);
+
list_del_init(&msm_obj->mm_list);
-   if (msm_obj->madv == MSM_MADV_WILLNEED)
+   if (msm_obj->madv == MSM_MADV_WILLNEED) {
list_add_tail(&msm_obj->mm_list, &priv->inactive_willneed);
-   else
+   } else if (msm_obj->madv == MSM_MADV_DONTNEED) {
list_add_tail(&msm_obj->mm_list, &priv->inactive_dontneed);
+   mark_purgable(msm_obj);
+   } else {
+   WARN_ON(msm_obj->madv != __MSM_MADV_PURGED);
+   list_add_tail(&msm_obj->mm_list, &priv->inactive_purged);
+   }
 
mutex_unlock(&priv->mm_lock);
 }
@@ -971,6 +981,8 @@ void msm_gem_free_object(struct drm_gem_object *obj)
struct msm_drm_private *priv = dev->dev_private;
 
mutex_lock(&priv->mm_lock);
+   if (msm_obj->dontneed)
+   mark_unpurgable(msm_obj);
list_del(&msm_obj->mm_list);
mutex_unlock(&priv->mm_lock);
 
diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h
index 7a9107cf1818..0feabae75d3d 100644
--- a/drivers/gpu/drm/msm/msm_gem.h
+++ b/drivers/gpu/drm/msm/msm_gem.h
@@ -50,6 +50,11 @@ struct msm_gem_object {
 */
uint8_t madv;
 
+   /**
+* Is object on inactive_dontneed list (ie. counted in 
priv->shrinkable_count)?
+*/
+   bool dontneed : 1;
+
/**
 * count of active vmap'ing
 */
@@ -198,6 +203,33 @@ static inline bool is_vunmapable(struct msm_gem_object 
*msm_obj)
return (msm_obj->vmap_count == 0) && msm_obj->vaddr;
 }
 
+static inline void mark_purgable(struct msm_gem_object *msm_obj)
+{
+   struct msm_drm_private *priv = msm_obj->base.dev->dev_private;
+
+   WARN_ON(!mutex_is_locked(&priv->mm_lock));
+
+   if (WARN_ON(msm_obj->dontneed))
+   return;
+
+   priv->shrinkable_count += msm_obj->base.size >> PAGE_SHIFT;
+   msm_obj->dontneed = true;
+}
+
+static inline void mark_unpurgable(struct msm_gem_object *msm_obj)
+{
+   struct msm_drm_private 

[Freedreno] [PATCH 0/4] drm/msm: Shrinker (and related) fixes

2021-03-31 Thread Rob Clark
From: Rob Clark 

I've been spending some time looking into how things behave under high
memory pressure.  The first patch is a random cleanup I noticed along
the way.  The second improves the situation significantly when the
shrinker is getting called from many threads in parallel.  And the last
two are $debugfs/gem fixes I needed so I could monitor the state of GEM
objects (i.e. how many are active/purgeable/purged) while triggering high
memory pressure.

We could probably go a bit further with dropping the mm_lock in the
shrinker->scan() loop, but this is already a pretty big improvement.
The next step is probably actually to add support to unpin/evict
inactive objects.  (We are part way there since we have already de-
coupled the iova lifetime from the pages lifetime, but there are a
few sharp corners to work through.)

Rob Clark (4):
  drm/msm: Remove unused freed llist node
  drm/msm: Avoid mutex in shrinker_count()
  drm/msm: Fix debugfs deadlock
  drm/msm: Improved debugfs gem stats

 drivers/gpu/drm/msm/msm_debugfs.c  | 14 ++
 drivers/gpu/drm/msm/msm_drv.c  |  4 ++
 drivers/gpu/drm/msm/msm_drv.h  | 10 -
 drivers/gpu/drm/msm/msm_fb.c   |  3 +-
 drivers/gpu/drm/msm/msm_gem.c  | 61 +-
 drivers/gpu/drm/msm/msm_gem.h  | 58 +---
 drivers/gpu/drm/msm/msm_gem_shrinker.c | 17 +--
 7 files changed, 122 insertions(+), 45 deletions(-)

-- 
2.30.2



[Freedreno] [PATCH 3/4] drm/msm: Fix debugfs deadlock

2021-03-31 Thread Rob Clark
From: Rob Clark 

In normal cases the gem obj lock is acquired before mm_lock.  The
exception is iterating the various object lists.  In the shrinker path,
deadlock is avoided by using msm_gem_trylock() and skipping over objects
that cannot be locked.  But for debugfs the straightforward thing is to
split things out into a separate list of all objects, protected by its
own lock.
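
In effect, the resulting lock ordering is roughly the following (a summary
sketch using the names from this patch):

  /*
   * priv->obj_lock            debugfs walk of priv->objects
   *   -> msm_gem_lock(obj)    per-object describe
   *
   * msm_gem_lock(obj)
   *   -> priv->mm_lock        active/inactive list manipulation
   *
   * i.e. mm_lock is never held while walking the full object list, so the
   * debugfs path no longer has to trylock or race against the shrinker.
   */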

Fixes: d984457b31c4 ("drm/msm: Add priv->mm_lock to protect active/inactive 
lists")
Signed-off-by: Rob Clark 
---
 drivers/gpu/drm/msm/msm_debugfs.c | 14 +++---
 drivers/gpu/drm/msm/msm_drv.c |  3 +++
 drivers/gpu/drm/msm/msm_drv.h |  8 +++-
 drivers/gpu/drm/msm/msm_gem.c | 14 +-
 drivers/gpu/drm/msm/msm_gem.h | 13 ++---
 5 files changed, 36 insertions(+), 16 deletions(-)

diff --git a/drivers/gpu/drm/msm/msm_debugfs.c 
b/drivers/gpu/drm/msm/msm_debugfs.c
index 85ad0babc326..d611cc8e54a4 100644
--- a/drivers/gpu/drm/msm/msm_debugfs.c
+++ b/drivers/gpu/drm/msm/msm_debugfs.c
@@ -111,23 +111,15 @@ static const struct file_operations msm_gpu_fops = {
 static int msm_gem_show(struct drm_device *dev, struct seq_file *m)
 {
struct msm_drm_private *priv = dev->dev_private;
-   struct msm_gpu *gpu = priv->gpu;
int ret;
 
-   ret = mutex_lock_interruptible(&priv->mm_lock);
+   ret = mutex_lock_interruptible(&priv->obj_lock);
if (ret)
return ret;
 
-   if (gpu) {
-   seq_printf(m, "Active Objects (%s):\n", gpu->name);
-   msm_gem_describe_objects(&gpu->active_list, m);
-   }
-
-   seq_printf(m, "Inactive Objects:\n");
-   msm_gem_describe_objects(&priv->inactive_dontneed, m);
-   msm_gem_describe_objects(&priv->inactive_willneed, m);
+   msm_gem_describe_objects(&priv->objects, m);
 
-   mutex_unlock(&priv->mm_lock);
+   mutex_unlock(&priv->obj_lock);
 
return 0;
 }
diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c
index 3462b0ea14c6..1ef1cd0cc714 100644
--- a/drivers/gpu/drm/msm/msm_drv.c
+++ b/drivers/gpu/drm/msm/msm_drv.c
@@ -474,6 +474,9 @@ static int msm_drm_init(struct device *dev, const struct 
drm_driver *drv)
 
priv->wq = alloc_ordered_workqueue("msm", 0);
 
+   INIT_LIST_HEAD(&priv->objects);
+   mutex_init(&priv->obj_lock);
+
INIT_LIST_HEAD(&priv->inactive_willneed);
INIT_LIST_HEAD(&priv->inactive_dontneed);
INIT_LIST_HEAD(&priv->inactive_purged);
diff --git a/drivers/gpu/drm/msm/msm_drv.h b/drivers/gpu/drm/msm/msm_drv.h
index 3ead5755f695..d69f4263bd4e 100644
--- a/drivers/gpu/drm/msm/msm_drv.h
+++ b/drivers/gpu/drm/msm/msm_drv.h
@@ -174,7 +174,13 @@ struct msm_drm_private {
struct msm_rd_state *hangrd;   /* debugfs to dump hanging submits */
struct msm_perf_state *perf;
 
-   /*
+   /**
+* List of all GEM objects (mainly for debugfs, protected by obj_lock
+*/
+   struct list_head objects;
+   struct mutex obj_lock;
+
+   /**
 * Lists of inactive GEM objects.  Every bo is either in one of the
 * inactive lists (depending on whether or not it is shrinkable) or
 * gpu->active_list (for the gpu it is active on[1])
diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index 74a92eedc992..c184ea68a6d0 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++ b/drivers/gpu/drm/msm/msm_gem.c
@@ -961,7 +961,7 @@ void msm_gem_describe_objects(struct list_head *list, 
struct seq_file *m)
size_t size = 0;
 
seq_puts(m, "   flags   id ref  offset   kaddrsize 
madv  name\n");
-   list_for_each_entry(msm_obj, list, mm_list) {
+   list_for_each_entry(msm_obj, list, node) {
struct drm_gem_object *obj = &msm_obj->base;
seq_puts(m, "   ");
msm_gem_describe(obj, m);
@@ -980,6 +980,10 @@ void msm_gem_free_object(struct drm_gem_object *obj)
struct drm_device *dev = obj->dev;
struct msm_drm_private *priv = dev->dev_private;
 
+   mutex_lock(&priv->obj_lock);
+   list_del(&msm_obj->node);
+   mutex_unlock(&priv->obj_lock);
+
mutex_lock(&priv->mm_lock);
if (msm_obj->dontneed)
mark_unpurgable(msm_obj);
@@ -1170,6 +1174,10 @@ static struct drm_gem_object *_msm_gem_new(struct 
drm_device *dev,
list_add_tail(&msm_obj->mm_list, &priv->inactive_willneed);
mutex_unlock(&priv->mm_lock);
 
+   mutex_lock(&priv->obj_lock);
+   list_add_tail(&msm_obj->node, &priv->objects);
+   mutex_unlock(&priv->obj_lock);
+
return obj;
 
 fail:
@@ -1240,6 +1248,10 @@ struct drm_gem_object *msm_gem_import(struct drm_device 
*dev,
list_add_tail(&msm_obj->mm_list, &priv->inactive_willneed);
mutex_unlock(&priv->mm_lock);
 
+   mutex_lock(&priv->obj_lock);
+   list_add_tail(&msm_obj->node, &priv->objects);
+   mutex_unlock(&priv->obj_lock);
+

[Freedreno] [PATCH 1/4] drm/msm: Remove unused freed llist node

2021-03-31 Thread Rob Clark
From: Rob Clark 

Unused since c951a9b284b907604759628d273901064c60d09f

Signed-off-by: Rob Clark 
---
 drivers/gpu/drm/msm/msm_gem.h | 2 --
 1 file changed, 2 deletions(-)

diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h
index b3a0a880cbab..7a9107cf1818 100644
--- a/drivers/gpu/drm/msm/msm_gem.h
+++ b/drivers/gpu/drm/msm/msm_gem.h
@@ -78,8 +78,6 @@ struct msm_gem_object {
 
struct list_head vmas;/* list of msm_gem_vma */
 
-   struct llist_node freed;
-
/* For physically contiguous buffers.  Used when we don't have
 * an IOMMU.  Also used for stolen/splashscreen buffer.
 */
-- 
2.30.2



Re: [Freedreno] [PATCH v4 23/24] drm/msm/dsi: inline msm_dsi_phy_set_src_pll

2021-03-31 Thread abhinavk

On 2021-03-31 03:57, Dmitry Baryshkov wrote:

The src_truthtable config is not used for some of the PHYs, which use other
means of configuring the master/slave usecases. Inline this function
with the goal of removing the src_pll_id argument in the next commit.
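
For the PHYs that do use the bit, the inlined form boils down to roughly
the following (a sketch based on the 14nm hunk below, which is truncated in
this archive, so consider the exact lines illustrative):

  u32 val = dsi_phy_read(base + REG_DSI_14nm_PHY_CMN_GLBL_TEST_CTRL);

  /* only DSI1 sourcing its bit clock from DSI0's PLL needs the mux bit */
  if (phy->id == DSI_1 && src_pll_id == DSI_0)
          val |= DSI_14nm_PHY_CMN_GLBL_TEST_CTRL_BITCLK_HS_SEL;
  else
          val &= ~DSI_14nm_PHY_CMN_GLBL_TEST_CTRL_BITCLK_HS_SEL;

  dsi_phy_write(base + REG_DSI_14nm_PHY_CMN_GLBL_TEST_CTRL, val);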

Signed-off-by: Dmitry Baryshkov 
Tested-by: Stephen Boyd  # on sc7180 lazor

Reviewed-by: Abhinav Kumar 

---
 drivers/gpu/drm/msm/dsi/phy/dsi_phy.c   | 17 -
 drivers/gpu/drm/msm/dsi/phy/dsi_phy.h   |  8 
 drivers/gpu/drm/msm/dsi/phy/dsi_phy_10nm.c  |  2 --
 drivers/gpu/drm/msm/dsi/phy/dsi_phy_14nm.c  | 13 +++--
 drivers/gpu/drm/msm/dsi/phy/dsi_phy_20nm.c  | 11 +++
 drivers/gpu/drm/msm/dsi/phy/dsi_phy_28nm.c  | 13 +++--
 drivers/gpu/drm/msm/dsi/phy/dsi_phy_28nm_8960.c |  1 -
 drivers/gpu/drm/msm/dsi/phy/dsi_phy_7nm.c   |  2 --
 8 files changed, 21 insertions(+), 46 deletions(-)

diff --git a/drivers/gpu/drm/msm/dsi/phy/dsi_phy.c
b/drivers/gpu/drm/msm/dsi/phy/dsi_phy.c
index 344887025720..93e81bb78d26 100644
--- a/drivers/gpu/drm/msm/dsi/phy/dsi_phy.c
+++ b/drivers/gpu/drm/msm/dsi/phy/dsi_phy.c
@@ -461,23 +461,6 @@ int msm_dsi_dphy_timing_calc_v4(struct
msm_dsi_dphy_timing *timing,
return 0;
 }

-void msm_dsi_phy_set_src_pll(struct msm_dsi_phy *phy, int pll_id, u32 
reg,

-   u32 bit_mask)
-{
-   int phy_id = phy->id;
-   u32 val;
-
-   if ((phy_id >= DSI_MAX) || (pll_id >= DSI_MAX))
-   return;
-
-   val = dsi_phy_read(phy->base + reg);
-
-   if (phy->cfg->src_pll_truthtable[phy_id][pll_id])
-   dsi_phy_write(phy->base + reg, val | bit_mask);
-   else
-   dsi_phy_write(phy->base + reg, val & (~bit_mask));
-}
-
 static int dsi_phy_regulator_init(struct msm_dsi_phy *phy)
 {
struct regulator_bulk_data *s = phy->supplies;
diff --git a/drivers/gpu/drm/msm/dsi/phy/dsi_phy.h
b/drivers/gpu/drm/msm/dsi/phy/dsi_phy.h
index 7748f8b5ea53..00ef01baaebd 100644
--- a/drivers/gpu/drm/msm/dsi/phy/dsi_phy.h
+++ b/drivers/gpu/drm/msm/dsi/phy/dsi_phy.h
@@ -33,12 +33,6 @@ struct msm_dsi_phy_cfg {
unsigned long   min_pll_rate;
unsigned long   max_pll_rate;

-   /*
-* Each cell {phy_id, pll_id} of the truth table indicates
-* if the source PLL selection bit should be set for each PHY.
-* Fill default H/W values in illegal cells, eg. cell {0, 1}.
-*/
-   bool src_pll_truthtable[DSI_MAX][DSI_MAX];
const resource_size_t io_start[DSI_MAX];
const int num_dsi_phy;
const int quirks;
@@ -121,7 +115,5 @@ int msm_dsi_dphy_timing_calc_v3(struct
msm_dsi_dphy_timing *timing,
struct msm_dsi_phy_clk_request *clk_req);
 int msm_dsi_dphy_timing_calc_v4(struct msm_dsi_dphy_timing *timing,
struct msm_dsi_phy_clk_request *clk_req);
-void msm_dsi_phy_set_src_pll(struct msm_dsi_phy *phy, int pll_id, u32 
reg,

-   u32 bit_mask);

 #endif /* __DSI_PHY_H__ */
diff --git a/drivers/gpu/drm/msm/dsi/phy/dsi_phy_10nm.c
b/drivers/gpu/drm/msm/dsi/phy/dsi_phy_10nm.c
index 655996cf8688..64b8b0efc1a4 100644
--- a/drivers/gpu/drm/msm/dsi/phy/dsi_phy_10nm.c
+++ b/drivers/gpu/drm/msm/dsi/phy/dsi_phy_10nm.c
@@ -921,7 +921,6 @@ static void dsi_10nm_phy_disable(struct msm_dsi_phy 
*phy)

 }

 const struct msm_dsi_phy_cfg dsi_phy_10nm_cfgs = {
-   .src_pll_truthtable = { {false, false}, {true, false} },
.has_phy_lane = true,
.reg_cfg = {
.num = 1,
@@ -943,7 +942,6 @@ const struct msm_dsi_phy_cfg dsi_phy_10nm_cfgs = {
 };

 const struct msm_dsi_phy_cfg dsi_phy_10nm_8998_cfgs = {
-   .src_pll_truthtable = { {false, false}, {true, false} },
.has_phy_lane = true,
.reg_cfg = {
.num = 1,
diff --git a/drivers/gpu/drm/msm/dsi/phy/dsi_phy_14nm.c
b/drivers/gpu/drm/msm/dsi/phy/dsi_phy_14nm.c
index 090d3e7a2212..9a2937589435 100644
--- a/drivers/gpu/drm/msm/dsi/phy/dsi_phy_14nm.c
+++ b/drivers/gpu/drm/msm/dsi/phy/dsi_phy_14nm.c
@@ -947,6 +947,7 @@ static int dsi_14nm_phy_enable(struct msm_dsi_phy
*phy, int src_pll_id,
int ret;
void __iomem *base = phy->base;
void __iomem *lane_base = phy->lane_base;
+   u32 glbl_test_ctrl;

if (msm_dsi_dphy_timing_calc_v2(timing, clk_req)) {
DRM_DEV_ERROR(&phy->pdev->dev,
@@ -994,10 +995,12 @@ static int dsi_14nm_phy_enable(struct
msm_dsi_phy *phy, int src_pll_id,
udelay(100);
dsi_phy_write(base + REG_DSI_14nm_PHY_CMN_CTRL_1, 0x00);

-   msm_dsi_phy_set_src_pll(phy, src_pll_id,
-   REG_DSI_14nm_PHY_CMN_GLBL_TEST_CTRL,
-   DSI_14nm_PHY_CMN_GLBL_TEST_CTRL_BITCLK_HS_SEL);
-
+	glbl_test_ctrl = dsi_phy_read(base + 
REG_DSI_14nm_PHY_CMN_GLBL_TEST_CTRL);

+   if (phy->id == DSI_1 && src_pll_id == DSI_0)
+   glbl_test_ctrl |= DSI_14nm_PHY_C

Re: [Freedreno] [PATCH v4 24/24] drm/msm/dsi: stop passing src_pll_id to the phy_enable call

2021-03-31 Thread abhinavk

On 2021-03-31 03:57, Dmitry Baryshkov wrote:
The PHY driver already knows the source PLL id based on the configured
usecase and its own PLL id, so stop passing it to the phy_enable call.
As a reminder, the DSI manager will always use DSI 0 as the clock master
in slave mode, so PLL 0 is always the clock source for DSI 0, and it is
also the clock source for DSI 1 unless DSI 1 is used in standalone mode.
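
Inside the PHY driver the source PLL can then be derived from the usecase
alone, e.g. (a minimal sketch; it assumes the driver's msm_dsi_phy_usecase
enum and is not part of this patch):

  static int dsi_phy_get_src_pll(struct msm_dsi_phy *phy)
  {
          /* in slave mode the clock master (DSI 0) always hosts the PLL */
          if (phy->usecase == MSM_DSI_PHY_SLAVE)
                  return DSI_0;

          return phy->id;
  }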

Signed-off-by: Dmitry Baryshkov 
Tested-by: Stephen Boyd  # on sc7180 lazor

Reviewed-by: Abhinav Kumar 

---
 drivers/gpu/drm/msm/dsi/dsi.h   |  2 +-
 drivers/gpu/drm/msm/dsi/dsi_manager.c   | 11 +--
 drivers/gpu/drm/msm/dsi/phy/dsi_phy.c   |  4 ++--
 drivers/gpu/drm/msm/dsi/phy/dsi_phy.h   |  2 +-
 drivers/gpu/drm/msm/dsi/phy/dsi_phy_10nm.c  |  2 +-
 drivers/gpu/drm/msm/dsi/phy/dsi_phy_14nm.c  |  4 ++--
 drivers/gpu/drm/msm/dsi/phy/dsi_phy_20nm.c  |  4 ++--
 drivers/gpu/drm/msm/dsi/phy/dsi_phy_28nm.c  |  4 ++--
 drivers/gpu/drm/msm/dsi/phy/dsi_phy_28nm_8960.c |  2 +-
 drivers/gpu/drm/msm/dsi/phy/dsi_phy_7nm.c   |  2 +-
 10 files changed, 18 insertions(+), 19 deletions(-)

diff --git a/drivers/gpu/drm/msm/dsi/dsi.h 
b/drivers/gpu/drm/msm/dsi/dsi.h

index 7f99e12efd52..7abfeab08165 100644
--- a/drivers/gpu/drm/msm/dsi/dsi.h
+++ b/drivers/gpu/drm/msm/dsi/dsi.h
@@ -162,7 +162,7 @@ struct msm_dsi_phy_clk_request {

 void msm_dsi_phy_driver_register(void);
 void msm_dsi_phy_driver_unregister(void);
-int msm_dsi_phy_enable(struct msm_dsi_phy *phy, int src_pll_id,
+int msm_dsi_phy_enable(struct msm_dsi_phy *phy,
struct msm_dsi_phy_clk_request *clk_req);
 void msm_dsi_phy_disable(struct msm_dsi_phy *phy);
 void msm_dsi_phy_get_shared_timings(struct msm_dsi_phy *phy,
diff --git a/drivers/gpu/drm/msm/dsi/dsi_manager.c
b/drivers/gpu/drm/msm/dsi/dsi_manager.c
index e116e5ff5d24..cd016576e8c5 100644
--- a/drivers/gpu/drm/msm/dsi/dsi_manager.c
+++ b/drivers/gpu/drm/msm/dsi/dsi_manager.c
@@ -114,7 +114,7 @@ static int dsi_mgr_setup_components(int id)
return ret;
 }

-static int enable_phy(struct msm_dsi *msm_dsi, int src_pll_id,
+static int enable_phy(struct msm_dsi *msm_dsi,
  struct msm_dsi_phy_shared_timings *shared_timings)
 {
struct msm_dsi_phy_clk_request clk_req;
@@ -123,7 +123,7 @@ static int enable_phy(struct msm_dsi *msm_dsi, int
src_pll_id,

msm_dsi_host_get_phy_clk_req(msm_dsi->host, &clk_req, is_dual_dsi);

-   ret = msm_dsi_phy_enable(msm_dsi->phy, src_pll_id, &clk_req);
+   ret = msm_dsi_phy_enable(msm_dsi->phy, &clk_req);
msm_dsi_phy_get_shared_timings(msm_dsi->phy, shared_timings);

return ret;
@@ -136,7 +136,6 @@ dsi_mgr_phy_enable(int id,
struct msm_dsi *msm_dsi = dsi_mgr_get_dsi(id);
struct msm_dsi *mdsi = dsi_mgr_get_dsi(DSI_CLOCK_MASTER);
struct msm_dsi *sdsi = dsi_mgr_get_dsi(DSI_CLOCK_SLAVE);
-   int src_pll_id = IS_DUAL_DSI() ? DSI_CLOCK_MASTER : id;
int ret;

/* In case of dual DSI, some registers in PHY1 have been programmed
@@ -149,11 +148,11 @@ dsi_mgr_phy_enable(int id,
msm_dsi_host_reset_phy(mdsi->host);
msm_dsi_host_reset_phy(sdsi->host);

-   ret = enable_phy(mdsi, src_pll_id,
+   ret = enable_phy(mdsi,
 &shared_timings[DSI_CLOCK_MASTER]);
if (ret)
return ret;
-   ret = enable_phy(sdsi, src_pll_id,
+   ret = enable_phy(sdsi,
 &shared_timings[DSI_CLOCK_SLAVE]);
if (ret) {
msm_dsi_phy_disable(mdsi->phy);
@@ -162,7 +161,7 @@ dsi_mgr_phy_enable(int id,
}
} else {
msm_dsi_host_reset_phy(msm_dsi->host);
-   ret = enable_phy(msm_dsi, src_pll_id, &shared_timings[id]);
+   ret = enable_phy(msm_dsi, &shared_timings[id]);
if (ret)
return ret;
}
diff --git a/drivers/gpu/drm/msm/dsi/phy/dsi_phy.c
b/drivers/gpu/drm/msm/dsi/phy/dsi_phy.c
index 93e81bb78d26..f0a2ddf96a4b 100644
--- a/drivers/gpu/drm/msm/dsi/phy/dsi_phy.c
+++ b/drivers/gpu/drm/msm/dsi/phy/dsi_phy.c
@@ -753,7 +753,7 @@ void __exit msm_dsi_phy_driver_unregister(void)
platform_driver_unregister(&dsi_phy_platform_driver);
 }

-int msm_dsi_phy_enable(struct msm_dsi_phy *phy, int src_pll_id,
+int msm_dsi_phy_enable(struct msm_dsi_phy *phy,
struct msm_dsi_phy_clk_request *clk_req)
 {
struct device *dev = &phy->pdev->dev;
@@ -776,7 +776,7 @@ int msm_dsi_phy_enable(struct msm_dsi_phy *phy,
int src_pll_id,
goto reg_en_fail;
}

-   ret = phy->cfg->ops.enable(phy, src_pll_id, clk_req);
+   ret = phy->cfg->ops.enable(phy, clk_req);
if (ret) {

Re: [Freedreno] [v1] drm/msm/disp/dpu1: icc path needs to be set before dpu runtime resume

2021-03-31 Thread Steev Klimaszewski

On 3/31/21 7:34 AM, kalya...@codeaurora.org wrote:
> On 2021-03-31 00:04, Steev Klimaszewski wrote:
>> On 3/22/21 4:17 AM, Kalyan Thota wrote:
>>> From: Kalyan Thota 
>>>
>>> DPU runtime resume will request a minimum vote on the AXI bus as
>>> it is a necessary step before turning on the AXI clock.
>>>
> Hi Steev,
>
> The WARN_ON is true only for devices with the compatible
> "qcom,sc7180-mdss". For other devices it's a
> false alarm. Can you please try with the below change?
>
> https://patchwork.kernel.org/project/linux-arm-msm/patch/1617190020-7931-1-git-send-email-kalya...@codeaurora.org/
>
>
> Thanks,
> Kalyan
>
Hi Kalyan,

Tested here, and it does get rid of the warning.  I'll keep a copy of
the patch locally, since this is going to hit stable too at some point
it seems, at least until another version comes out addressing the other
comments from people way smarter than me.

-- steev



Re: [Freedreno] [PATCH v4 17/24] drm/msm/dsi: make save_state/restore_state callbacks accept msm_dsi_phy

2021-03-31 Thread abhinavk

On 2021-03-31 03:57, Dmitry Baryshkov wrote:

Make the save_state/restore_state callbacks accept struct msm_dsi_phy rather
than struct msm_dsi_pll. This moves them to struct msm_dsi_phy_ops, allowing
us to drop struct msm_dsi_pll_ops.

Signed-off-by: Dmitry Baryshkov 
Tested-by: Stephen Boyd  # on sc7180 lazor

Reviewed-by: Abhinav Kumar 

---
 drivers/gpu/drm/msm/dsi/phy/dsi_phy.c | 12 +++
 drivers/gpu/drm/msm/dsi/phy/dsi_phy.h | 11 +++---
 drivers/gpu/drm/msm/dsi/phy/dsi_phy_10nm.c| 24 ++---
 drivers/gpu/drm/msm/dsi/phy/dsi_phy_14nm.c| 24 ++---
 drivers/gpu/drm/msm/dsi/phy/dsi_phy_28nm.c| 34 ---
 .../gpu/drm/msm/dsi/phy/dsi_phy_28nm_8960.c   | 18 +-
 drivers/gpu/drm/msm/dsi/phy/dsi_phy_7nm.c | 24 ++---
 7 files changed, 64 insertions(+), 83 deletions(-)

diff --git a/drivers/gpu/drm/msm/dsi/phy/dsi_phy.c
b/drivers/gpu/drm/msm/dsi/phy/dsi_phy.c
index a1360e2dad3b..2c5ccead3baa 100644
--- a/drivers/gpu/drm/msm/dsi/phy/dsi_phy.c
+++ b/drivers/gpu/drm/msm/dsi/phy/dsi_phy.c
@@ -858,9 +858,9 @@ int msm_dsi_phy_get_clk_provider(struct msm_dsi_phy 
*phy,


 void msm_dsi_phy_pll_save_state(struct msm_dsi_phy *phy)
 {
-   if (phy->cfg->pll_ops.save_state) {
-   phy->cfg->pll_ops.save_state(phy->pll);
-   phy->pll->state_saved = true;
+   if (phy->cfg->ops.save_pll_state) {
+   phy->cfg->ops.save_pll_state(phy);
+   phy->state_saved = true;
}
 }

@@ -868,12 +868,12 @@ int msm_dsi_phy_pll_restore_state(struct 
msm_dsi_phy *phy)

 {
int ret;

-   if (phy->cfg->pll_ops.restore_state && phy->pll->state_saved) {
-   ret = phy->cfg->pll_ops.restore_state(phy->pll);
+   if (phy->cfg->ops.restore_pll_state && phy->state_saved) {
+   ret = phy->cfg->ops.restore_pll_state(phy);
if (ret)
return ret;

-   phy->pll->state_saved = false;
+   phy->state_saved = false;
}

return 0;
diff --git a/drivers/gpu/drm/msm/dsi/phy/dsi_phy.h
b/drivers/gpu/drm/msm/dsi/phy/dsi_phy.h
index b477d21804c8..0b51828c3146 100644
--- a/drivers/gpu/drm/msm/dsi/phy/dsi_phy.h
+++ b/drivers/gpu/drm/msm/dsi/phy/dsi_phy.h
@@ -17,7 +17,6 @@
 struct msm_dsi_pll {
struct clk_hw   clk_hw;
boolpll_on;
-   boolstate_saved;

const struct msm_dsi_phy_cfg *cfg;
 };
@@ -29,17 +28,13 @@ struct msm_dsi_phy_ops {
int (*enable)(struct msm_dsi_phy *phy, int src_pll_id,
struct msm_dsi_phy_clk_request *clk_req);
void (*disable)(struct msm_dsi_phy *phy);
-};
-
-struct msm_dsi_pll_ops {
-   void (*save_state)(struct msm_dsi_pll *pll);
-   int (*restore_state)(struct msm_dsi_pll *pll);
+   void (*save_pll_state)(struct msm_dsi_phy *phy);
+   int (*restore_pll_state)(struct msm_dsi_phy *phy);
 };

 struct msm_dsi_phy_cfg {
struct dsi_reg_config reg_cfg;
struct msm_dsi_phy_ops ops;
-   const struct msm_dsi_pll_ops pll_ops;

unsigned long   min_pll_rate;
unsigned long   max_pll_rate;
@@ -115,6 +110,8 @@ struct msm_dsi_phy {
struct msm_dsi_pll *pll;

struct clk_hw_onecell_data *provided_clocks;
+
+   bool state_saved;
 };

 /*
diff --git a/drivers/gpu/drm/msm/dsi/phy/dsi_phy_10nm.c
b/drivers/gpu/drm/msm/dsi/phy/dsi_phy_10nm.c
index 91ae0f8dbd88..fefff08f83fd 100644
--- a/drivers/gpu/drm/msm/dsi/phy/dsi_phy_10nm.c
+++ b/drivers/gpu/drm/msm/dsi/phy/dsi_phy_10nm.c
@@ -518,9 +518,9 @@ static const struct clk_ops 
clk_ops_dsi_pll_10nm_vco = {

  * PLL Callbacks
  */

-static void dsi_pll_10nm_save_state(struct msm_dsi_pll *pll)
+static void dsi_10nm_pll_save_state(struct msm_dsi_phy *phy)
 {
-   struct dsi_pll_10nm *pll_10nm = to_pll_10nm(pll);
+   struct dsi_pll_10nm *pll_10nm = to_pll_10nm(phy->pll);
struct pll_10nm_cached_state *cached = &pll_10nm->cached_state;
void __iomem *phy_base = pll_10nm->phy_cmn_mmio;
u32 cmn_clk_cfg0, cmn_clk_cfg1;
@@ -541,9 +541,9 @@ static void dsi_pll_10nm_save_state(struct 
msm_dsi_pll *pll)

cached->pix_clk_div, cached->pll_mux);
 }

-static int dsi_pll_10nm_restore_state(struct msm_dsi_pll *pll)
+static int dsi_10nm_pll_restore_state(struct msm_dsi_phy *phy)
 {
-   struct dsi_pll_10nm *pll_10nm = to_pll_10nm(pll);
+   struct dsi_pll_10nm *pll_10nm = to_pll_10nm(phy->pll);
struct pll_10nm_cached_state *cached = &pll_10nm->cached_state;
void __iomem *phy_base = pll_10nm->phy_cmn_mmio;
u32 val;
@@ -562,7 +562,9 @@ static int dsi_pll_10nm_restore_state(struct
msm_dsi_pll *pll)
val |= cached->pll_mux;
pll_write(phy_base + REG_DSI_10nm_PHY_CMN_CLK_CFG1, val);

-   ret = dsi_pll_10nm_vco_set_rate(&pll->clk_hw,
pll_10nm->vco_current_rate, pll_10nm->vco_ref_clk_rate);
+   ret = dsi_pll_10nm_vco_set_rate(&phy->pll->clk_hw,
+  

Re: [Freedreno] [PATCH 16/18] iommu: remove DOMAIN_ATTR_DMA_USE_FLUSH_QUEUE

2021-03-31 Thread Robin Murphy

On 2021-03-16 15:38, Christoph Hellwig wrote:
[...]

diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c 
b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
index f1e38526d5bd40..996dfdf9d375dd 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
@@ -2017,7 +2017,7 @@ static int arm_smmu_domain_finalise(struct iommu_domain 
*domain,
.iommu_dev  = smmu->dev,
};
  
-	if (smmu_domain->non_strict)

+   if (!iommu_get_dma_strict())


As Will raised, this also needs to be checking "domain->type == 
IOMMU_DOMAIN_DMA" to maintain equivalent behaviour to the attribute code 
below.



pgtbl_cfg.quirks |= IO_PGTABLE_QUIRK_NON_STRICT;
  
  	pgtbl_ops = alloc_io_pgtable_ops(fmt, &pgtbl_cfg, smmu_domain);

@@ -2449,52 +2449,6 @@ static struct iommu_group *arm_smmu_device_group(struct 
device *dev)
return group;
  }
  
-static int arm_smmu_domain_get_attr(struct iommu_domain *domain,

-   enum iommu_attr attr, void *data)
-{
-   struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
-
-   switch (domain->type) {
-   case IOMMU_DOMAIN_DMA:
-   switch (attr) {
-   case DOMAIN_ATTR_DMA_USE_FLUSH_QUEUE:
-   *(int *)data = smmu_domain->non_strict;
-   return 0;
-   default:
-   return -ENODEV;
-   }
-   break;
-   default:
-   return -EINVAL;
-   }
-}

[...]

diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h 
b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
index f985817c967a25..edb1de479dd1a7 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
@@ -668,7 +668,6 @@ struct arm_smmu_domain {
struct mutexinit_mutex; /* Protects smmu pointer */
  
  	struct io_pgtable_ops		*pgtbl_ops;

-   boolnon_strict;
atomic_tnr_ats_masters;
  
  	enum arm_smmu_domain_stage	stage;

diff --git a/drivers/iommu/arm/arm-smmu/arm-smmu.c 
b/drivers/iommu/arm/arm-smmu/arm-smmu.c
index 0aa6d667274970..3dde22b1f8ffb0 100644
--- a/drivers/iommu/arm/arm-smmu/arm-smmu.c
+++ b/drivers/iommu/arm/arm-smmu/arm-smmu.c
@@ -761,6 +761,9 @@ static int arm_smmu_init_domain_context(struct iommu_domain 
*domain,
.iommu_dev  = smmu->dev,
};
  
+	if (!iommu_get_dma_strict())


Ditto here.

Sorry for not spotting that sooner :(

Robin.


+   pgtbl_cfg.quirks |= IO_PGTABLE_QUIRK_NON_STRICT;
+
if (smmu->impl && smmu->impl->init_context) {
ret = smmu->impl->init_context(smmu_domain, &pgtbl_cfg, dev);
if (ret)



Re: [Freedreno] [PATCH 16/18] iommu: remove DOMAIN_ATTR_DMA_USE_FLUSH_QUEUE

2021-03-31 Thread Robin Murphy

On 2021-03-31 16:32, Will Deacon wrote:

On Wed, Mar 31, 2021 at 02:09:37PM +0100, Robin Murphy wrote:

On 2021-03-31 12:49, Will Deacon wrote:

On Tue, Mar 30, 2021 at 05:28:19PM +0100, Robin Murphy wrote:

On 2021-03-30 14:58, Will Deacon wrote:

On Tue, Mar 30, 2021 at 02:19:38PM +0100, Robin Murphy wrote:

On 2021-03-30 14:11, Will Deacon wrote:

On Tue, Mar 16, 2021 at 04:38:22PM +0100, Christoph Hellwig wrote:

From: Robin Murphy 

Instead make the global iommu_dma_strict parameter in iommu.c canonical by
exporting helpers to get and set it and use those directly in the drivers.

This makes sure that the iommu.strict parameter also works for the AMD and
Intel IOMMU drivers on x86.  As those default to lazy flushing, a new
IOMMU_CMD_LINE_STRICT is used to turn the value into a tristate to
represent the default if not overridden by an explicit parameter.

Signed-off-by: Robin Murphy .
[ported on top of the other iommu_attr changes and added a few small
 missing bits]
Signed-off-by: Christoph Hellwig 
---
 drivers/iommu/amd/iommu.c   | 23 +---
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 50 +---
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h |  1 -
 drivers/iommu/arm/arm-smmu/arm-smmu.c   | 27 +
 drivers/iommu/dma-iommu.c   |  9 +--
 drivers/iommu/intel/iommu.c | 64 -
 drivers/iommu/iommu.c   | 27 ++---
 include/linux/iommu.h   |  4 +-
 8 files changed, 40 insertions(+), 165 deletions(-)


I really like this cleanup, but I can't help wonder if it's going in the
wrong direction. With SoCs often having multiple IOMMU instances and a
distinction between "trusted" and "untrusted" devices, then having the
flush-queue enabled on a per-IOMMU or per-domain basis doesn't sound
unreasonable to me, but this change makes it a global property.


The intent here was just to streamline the existing behaviour of stuffing a
global property into a domain attribute then pulling it out again in the
illusion that it was in any way per-domain. We're still checking
dev_is_untrusted() before making an actual decision, and it's not like we
can't add more factors at that point if we want to.


Like I say, the cleanup is great. I'm just wondering whether there's a
better way to express the complicated logic to decide whether or not to use
the flush queue than what we end up with:

if (!cookie->fq_domain && (!dev || !dev_is_untrusted(dev)) &&
domain->ops->flush_iotlb_all && !iommu_get_dma_strict())

which is mixing up globals, device properties and domain properties. The
result is that the driver code ends up just using the global to determine
whether or not to pass IO_PGTABLE_QUIRK_NON_STRICT to the page-table code,
which is a departure from the current way of doing things.


But previously, SMMU only ever saw the global policy piped through the
domain attribute by iommu_group_alloc_default_domain(), so there's no
functional change there.


For DMA domains sure, but I don't think that's the case for unmanaged
domains such as those used by VFIO.


Eh? This is only relevant to DMA domains anyway. Flush queues are part of
the IOVA allocator that VFIO doesn't even use. It's always been the case
that unmanaged domains only use strict invalidation.


Maybe I'm going mad. With this patch, the SMMU driver unconditionally sets
IO_PGTABLE_QUIRK_NON_STRICT for page-tables if iommu_get_dma_strict() is
true, no? In which case, that will get set for page-tables corresponding
to unmanaged domains as well as DMA domains when it is enabled. That didn't
happen before because you couldn't set the attribute for unmanaged domains.

What am I missing?


Oh cock... sorry, all this time I've been saying what I *expect* it to 
do, while overlooking the fact that the IO_PGTABLE_QUIRK_NON_STRICT 
hunks were the bits I forgot to write and Christoph had to fix up. 
Indeed, those should be checking the domain type too to preserve the 
existing behaviour. Apologies for the confusion.


Robin.


Obviously some of the above checks could be factored out into some kind of
iommu_use_flush_queue() helper that IOMMU drivers can also call if they need
to keep in sync. Or maybe we just allow iommu-dma to set
IO_PGTABLE_QUIRK_NON_STRICT directly via iommu_set_pgtable_quirks() if we're
treating that as a generic thing now.


I think a helper that takes a domain would be a good starting point.


You mean device, right? The one condition we currently have is at the device
level, and there's really nothing inherent to the domain itself that matters
(since the type is implicitly IOMMU_DOMAIN_DMA to even care about this).


Device would probably work too; you'd pass the first device to attach to the
domain when querying this from the SMMU driver, I suppose.

Will
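
For illustration, the kind of helper being discussed might look roughly like
the sketch below, folding in the domain-type check raised above; it assumes
dev_is_untrusted() and iommu_get_dma_strict() are visible to the caller and
is not the code that was eventually merged:

  static bool iommu_use_flush_queue(struct iommu_domain *domain,
                                    struct device *dev)
  {
          /* flush queues only make sense for DMA API domains */
          if (domain->type != IOMMU_DOMAIN_DMA)
                  return false;

          /* untrusted devices always get strict invalidation */
          if (dev && dev_is_untrusted(dev))
                  return false;

          /* the driver must be able to flush lazily */
          if (!domain->ops->flush_iotlb_all)
                  return false;

          return !iommu_get_dma_strict();
  }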



Re: [Freedreno] [v1] drm/msm/disp/dpu1: fix warn stack reported during dpu resume

2021-03-31 Thread Dmitry Baryshkov

On 31/03/2021 14:27, Kalyan Thota wrote:

WARN_ON was introduced by the below commit to catch runtime resumes
that are getting triggered before icc path was set.

"drm/msm/disp/dpu1: icc path needs to be set before dpu runtime resume"

For the targets where the bw scaling is not enabled, this WARN_ON is
a false alarm. Fix the WARN condition appropriately.


Should we change all DPU targets to use bw scaling to the mdp from the 
mdss nodes? The limitation to sc7180 looks artificial.




Reported-by: Steev Klimaszewski 
Signed-off-by: Kalyan Thota 
---
  drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c  |  8 +---
  drivers/gpu/drm/msm/disp/dpu1/dpu_kms.h  |  9 +
  drivers/gpu/drm/msm/disp/dpu1/dpu_mdss.c | 11 ++-
  3 files changed, 20 insertions(+), 8 deletions(-)

diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c 
b/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c
index cab387f..0071a4d 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c
@@ -294,6 +294,9 @@ static int dpu_kms_parse_data_bus_icc_path(struct dpu_kms 
*dpu_kms)
struct icc_path *path1;
struct drm_device *dev = dpu_kms->dev;
  
+	if (!dpu_supports_bw_scaling(dev))

+   return 0;
+
path0 = of_icc_get(dev->dev, "mdp0-mem");
path1 = of_icc_get(dev->dev, "mdp1-mem");
  
@@ -934,8 +937,7 @@ static int dpu_kms_hw_init(struct msm_kms *kms)

DPU_DEBUG("REG_DMA is not defined");
}
  
-	if (of_device_is_compatible(dev->dev->of_node, "qcom,sc7180-mdss"))

-   dpu_kms_parse_data_bus_icc_path(dpu_kms);
+   dpu_kms_parse_data_bus_icc_path(dpu_kms);
  
  	pm_runtime_get_sync(&dpu_kms->pdev->dev);
  
@@ -1198,7 +1200,7 @@ static int __maybe_unused dpu_runtime_resume(struct device *dev)
  
  	ddev = dpu_kms->dev;
  
-	WARN_ON(!(dpu_kms->num_paths));

+   WARN_ON((dpu_supports_bw_scaling(ddev) && !dpu_kms->num_paths));
/* Min vote of BW is required before turning on AXI clk */
for (i = 0; i < dpu_kms->num_paths; i++)
icc_set_bw(dpu_kms->path[i], 0, Bps_to_icc(MIN_IB_BW));
diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.h 
b/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.h
index d6717d6..f7bcc0a 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.h
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.h
@@ -154,6 +154,15 @@ struct vsync_info {
  
  #define to_dpu_global_state(x) container_of(x, struct dpu_global_state, base)
  
+/**

+ * dpu_supports_bw_scaling: returns true for drivers that support bw scaling.
+ * @dev: Pointer to drm_device structure
+ */
+static inline int dpu_supports_bw_scaling(struct drm_device *dev)
+{
+   return of_device_is_compatible(dev->dev->of_node, "qcom,sc7180-mdss");
+}
+
  /* Global private object state for tracking resources that are shared across
   * multiple kms objects (planes/crtcs/etc).
   */
diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_mdss.c 
b/drivers/gpu/drm/msm/disp/dpu1/dpu_mdss.c
index cd40788..8cd712c 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_mdss.c
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_mdss.c
@@ -41,6 +41,9 @@ static int dpu_mdss_parse_data_bus_icc_path(struct drm_device 
*dev,
struct icc_path *path0 = of_icc_get(dev->dev, "mdp0-mem");
struct icc_path *path1 = of_icc_get(dev->dev, "mdp1-mem");
  
+	if (dpu_supports_bw_scaling(dev))

+   return 0;
+
if (IS_ERR_OR_NULL(path0))
return PTR_ERR_OR_ZERO(path0);
  
@@ -276,11 +279,9 @@ int dpu_mdss_init(struct drm_device *dev)
  
  	DRM_DEBUG("mapped mdss address space @%pK\n", dpu_mdss->mmio);
  
-	if (!of_device_is_compatible(dev->dev->of_node, "qcom,sc7180-mdss")) {

-   ret = dpu_mdss_parse_data_bus_icc_path(dev, dpu_mdss);
-   if (ret)
-   return ret;
-   }
+   ret = dpu_mdss_parse_data_bus_icc_path(dev, dpu_mdss);
+   if (ret)
+   return ret;
  
  	mp = &dpu_mdss->mp;

ret = msm_dss_parse_clock(pdev, mp);




--
With best wishes
Dmitry


Re: [Freedreno] [v1] drm/msm/disp/dpu1: fix warn stack reported during dpu resume

2021-03-31 Thread Doug Anderson
Hi,

On Wed, Mar 31, 2021 at 4:27 AM Kalyan Thota  wrote:
>
> @@ -294,6 +294,9 @@ static int dpu_kms_parse_data_bus_icc_path(struct dpu_kms 
> *dpu_kms)
> struct icc_path *path1;
> struct drm_device *dev = dpu_kms->dev;
>
> +   if (!dpu_supports_bw_scaling(dev))
> +   return 0;
> +
> path0 = of_icc_get(dev->dev, "mdp0-mem");
> path1 = of_icc_get(dev->dev, "mdp1-mem");
>

Instead of hard coding a check for specific SoC compatible strings,
why not just check to see if path0 and/or path1 are ERR_PTR(-ENODEV)?
Then change dpu_supports_bw_scaling() to just return:

!IS_ERR(dpu_kms->path[0])

It also seems like it would be nice if you did something if you got an
error other than -ENODEV. Right now this function returns it but the
caller ignores it? At least spit an error message out?


> @@ -154,6 +154,15 @@ struct vsync_info {
>
>  #define to_dpu_global_state(x) container_of(x, struct dpu_global_state, base)
>
> +/**
> + * dpu_supports_bw_scaling: returns true for drivers that support bw scaling.
> + * @dev: Pointer to drm_device structure
> + */
> +static inline int dpu_supports_bw_scaling(struct drm_device *dev)
> +{
> +   return of_device_is_compatible(dev->dev->of_node, "qcom,sc7180-mdss");

See above, but I think this would be better as:

  return !IS_ERR(dpu_kms->path[0]);

Specifically, I don't think of_device_is_compatible() is really
designed as something to call a lot. It's doing a whole bunch of data
structure parsing / string comparisons. It's OK-ish during probe
(though better to use the of_match_table), but you don't want to call
it on every runtime suspend / runtime resume.
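
Put together, the suggestion would look roughly like the sketch below (an
illustration of the review comment, not the posted patch; it assumes the
existing dpu_kms->path[]/num_paths fields and the driver's DPU_ERROR macro):

  /* probe time: remember whether DT described an interconnect path */
  static int dpu_kms_parse_data_bus_icc_path(struct dpu_kms *dpu_kms)
  {
          struct drm_device *dev = dpu_kms->dev;
          struct icc_path *path0 = of_icc_get(dev->dev, "mdp0-mem");

          if (!path0 || PTR_ERR_OR_ZERO(path0) == -ENODEV)
                  return 0;   /* no path described: bw scaling not used */

          if (IS_ERR(path0)) {
                  DPU_ERROR("failed to get mdp0-mem path: %ld\n",
                            PTR_ERR(path0));
                  return PTR_ERR(path0);
          }

          dpu_kms->path[dpu_kms->num_paths++] = path0;
          return 0;
  }

  /* runtime: cheap check, no of_device_is_compatible() on every resume */
  static inline bool dpu_supports_bw_scaling(struct dpu_kms *dpu_kms)
  {
          return dpu_kms->num_paths > 0;
  }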


Re: [Freedreno] [PATCH 16/18] iommu: remove DOMAIN_ATTR_DMA_USE_FLUSH_QUEUE

2021-03-31 Thread Will Deacon
On Wed, Mar 31, 2021 at 02:09:37PM +0100, Robin Murphy wrote:
> On 2021-03-31 12:49, Will Deacon wrote:
> > On Tue, Mar 30, 2021 at 05:28:19PM +0100, Robin Murphy wrote:
> > > On 2021-03-30 14:58, Will Deacon wrote:
> > > > On Tue, Mar 30, 2021 at 02:19:38PM +0100, Robin Murphy wrote:
> > > > > On 2021-03-30 14:11, Will Deacon wrote:
> > > > > > On Tue, Mar 16, 2021 at 04:38:22PM +0100, Christoph Hellwig wrote:
> > > > > > > From: Robin Murphy 
> > > > > > > 
> > > > > > > Instead make the global iommu_dma_strict parameter in iommu.c 
> > > > > > > canonical by
> > > > > > > exporting helpers to get and set it and use those directly in the 
> > > > > > > drivers.
> > > > > > > 
> > > > > > > This makes sure that the iommu.strict parameter also works for the 
> > > > > > > AMD and
> > > > > > > Intel IOMMU drivers on x86.  As those default to lazy flushing a 
> > > > > > > new
> > > > > > > IOMMU_CMD_LINE_STRICT is used to turn the value into a tristate to
> > > > > > > represent the default if not overridden by an explicit parameter.
> > > > > > > 
> > > > > > > Signed-off-by: Robin Murphy .
> > > > > > > [ported on top of the other iommu_attr changes and added a few 
> > > > > > > small
> > > > > > > missing bits]
> > > > > > > Signed-off-by: Christoph Hellwig 
> > > > > > > ---
> > > > > > > drivers/iommu/amd/iommu.c   | 23 +---
> > > > > > > drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 50 
> > > > > > > +---
> > > > > > > drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h |  1 -
> > > > > > > drivers/iommu/arm/arm-smmu/arm-smmu.c   | 27 +
> > > > > > > drivers/iommu/dma-iommu.c   |  9 +--
> > > > > > > drivers/iommu/intel/iommu.c | 64 
> > > > > > > -
> > > > > > > drivers/iommu/iommu.c   | 27 ++---
> > > > > > > include/linux/iommu.h   |  4 +-
> > > > > > > 8 files changed, 40 insertions(+), 165 deletions(-)
> > > > > > 
> > > > > > I really like this cleanup, but I can't help wonder if it's going 
> > > > > > in the
> > > > > > wrong direction. With SoCs often having multiple IOMMU instances 
> > > > > > and a
> > > > > > distinction between "trusted" and "untrusted" devices, then having 
> > > > > > the
> > > > > > flush-queue enabled on a per-IOMMU or per-domain basis doesn't sound
> > > > > > unreasonable to me, but this change makes it a global property.
> > > > > 
> > > > > The intent here was just to streamline the existing behaviour of 
> > > > > stuffing a
> > > > > global property into a domain attribute then pulling it out again in 
> > > > > the
> > > > > illusion that it was in any way per-domain. We're still checking
> > > > > dev_is_untrusted() before making an actual decision, and it's not 
> > > > > like we
> > > > > can't add more factors at that point if we want to.
> > > > 
> > > > Like I say, the cleanup is great. I'm just wondering whether there's a
> > > > better way to express the complicated logic to decide whether or not to 
> > > > use
> > > > the flush queue than what we end up with:
> > > > 
> > > > if (!cookie->fq_domain && (!dev || !dev_is_untrusted(dev)) &&
> > > > domain->ops->flush_iotlb_all && !iommu_get_dma_strict())
> > > > 
> > > > which is mixing up globals, device properties and domain properties. The
> > > > result is that the driver code ends up just using the global to 
> > > > determine
> > > > whether or not to pass IO_PGTABLE_QUIRK_NON_STRICT to the page-table 
> > > > code,
> > > > which is a departure from the current way of doing things.
> > > 
> > > But previously, SMMU only ever saw the global policy piped through the
> > > domain attribute by iommu_group_alloc_default_domain(), so there's no
> > > functional change there.
> > 
> > For DMA domains sure, but I don't think that's the case for unmanaged
> > domains such as those used by VFIO.
> 
> Eh? This is only relevant to DMA domains anyway. Flush queues are part of
> the IOVA allocator that VFIO doesn't even use. It's always been the case
> that unmanaged domains only use strict invalidation.

Maybe I'm going mad. With this patch, the SMMU driver unconditionally sets
IO_PGTABLE_QUIRK_NON_STRICT for page-tables if iommu_get_dma_strict() is
true, no? In which case, that will get set for page-tables corresponding
to unmanaged domains as well as DMA domains when it is enabled. That didn't
happen before because you couldn't set the attribute for unmanaged domains.

What am I missing?

> > > Obviously some of the above checks could be factored out into some kind of
> > > iommu_use_flush_queue() helper that IOMMU drivers can also call if they 
> > > need
> > > to keep in sync. Or maybe we just allow iommu-dma to set
> > > IO_PGTABLE_QUIRK_NON_STRICT directly via iommu_set_pgtable_quirks() if 
> > > we're
> > > treating that as a generic thing now.
> > 
> > I think a helper that takes a domain would be a good starting point.

Re: [Freedreno] [PATCH] drm/msm: a6xx: fix version check for the A650 SQE microcode

2021-03-31 Thread Rob Clark
fixing Jordan's email so he actually sees this

On Wed, Mar 31, 2021 at 7:02 AM Dmitry Baryshkov
 wrote:
>
> I suppose the microcode version check for a650 is incorrect. It checks
> for version 1.95, while the released firmware has a major version of 0:
> 0.91 (vulnerable), 0.99 (fixing the issue).
>
> Lower the version requirement to accept firmware 0.99.
>
> Fixes: 8490f02a3ca4 ("drm/msm: a6xx: Make sure the SQE microcode is safe")
> Cc: Akhil P Oommen 
> Cc: Jordan Crouse 
> Signed-off-by: Dmitry Baryshkov 
> ---
>  drivers/gpu/drm/msm/adreno/a6xx_gpu.c | 6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c 
> b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
> index cb2df8736ca8..896b47dc9c85 100644
> --- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
> +++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
> @@ -567,17 +567,17 @@ static bool a6xx_ucode_check_version(struct a6xx_gpu 
> *a6xx_gpu,
> }  else {
> /*
>  * a650 tier targets don't need whereami but still need to be
> -* equal to or newer than 1.95 for other security fixes
> +* equal to or newer than 0.95 for other security fixes
>  */
> if (adreno_is_a650(adreno_gpu)) {
> -   if ((buf[0] & 0xfff) >= 0x195) {
> +   if ((buf[0] & 0xfff) >= 0x095) {
> ret = true;
> goto out;
> }
>
> DRM_DEV_ERROR(&gpu->pdev->dev,
> "a650 SQE ucode is too old. Have version %x 
> need at least %x\n",
> -   buf[0] & 0xfff, 0x195);
> +   buf[0] & 0xfff, 0x095);
> }
>
> /*
> --
> 2.30.2
>


[Freedreno] [PATCH] drm/msm: a6xx: fix version check for the A650 SQE microcode

2021-03-31 Thread Dmitry Baryshkov
I suppose the microcode version check for a650 is incorrect. It checks
for version 1.95, while the released firmware has a major version of 0:
0.91 (vulnerable), 0.99 (fixing the issue).

Lower the version requirement to accept firmware 0.99.
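
For example, with the released firmware reported in the review of the
original patch (buf[0] = 0x016dd099), the check works out as follows:

  u32 ver = buf[0] & 0xfff;   /* = 0x099 */

  /* old check: 0x099 >= 0x195 -> false, working firmware is rejected  */
  /* new check: 0x099 >= 0x095 -> true,  0.99 (the fixed release) passes */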

Fixes: 8490f02a3ca4 ("drm/msm: a6xx: Make sure the SQE microcode is safe")
Cc: Akhil P Oommen 
Cc: Jordan Crouse 
Signed-off-by: Dmitry Baryshkov 
---
 drivers/gpu/drm/msm/adreno/a6xx_gpu.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c 
b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
index cb2df8736ca8..896b47dc9c85 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
@@ -567,17 +567,17 @@ static bool a6xx_ucode_check_version(struct a6xx_gpu 
*a6xx_gpu,
}  else {
/*
 * a650 tier targets don't need whereami but still need to be
-* equal to or newer than 1.95 for other security fixes
+* equal to or newer than 0.95 for other security fixes
 */
if (adreno_is_a650(adreno_gpu)) {
-   if ((buf[0] & 0xfff) >= 0x195) {
+   if ((buf[0] & 0xfff) >= 0x095) {
ret = true;
goto out;
}
 
DRM_DEV_ERROR(&gpu->pdev->dev,
"a650 SQE ucode is too old. Have version %x 
need at least %x\n",
-   buf[0] & 0xfff, 0x195);
+   buf[0] & 0xfff, 0x095);
}
 
/*
-- 
2.30.2



Re: [Freedreno] [PATCH v2] drm/msm: a6xx: Make sure the SQE microcode is safe

2021-03-31 Thread Dmitry Baryshkov

Hello,

On 10/02/2021 03:52, Jordan Crouse wrote:

Most a6xx targets have security issues that were fixed with new versions
of the microcode(s). Make sure that we are booting with a safe version of
the microcode for the target, and print a message and return an error if not.

v2: Add more informative error messages and fix typos

Signed-off-by: Jordan Crouse 


[skipped]


+   }  else {
+   /*
+* a650 tier targets don't need whereami but still need to be
+* equal to or newer than 1.95 for other security fixes
+*/
+   if (adreno_is_a650(adreno_gpu)) {
+   if ((buf[0] & 0xfff) >= 0x195) {
+   ret = true;
+   goto out;
+   }


I think this is incorrect. The latest firmware I have here also fails
this check, with buf[0] = 0x016dd099, so buf[0] & 0xfff = 0x099.


Could you please confirm the versioning?


+
+   DRM_DEV_ERROR(&gpu->pdev->dev,
+   "a650 SQE ucode is too old. Have version %x need at 
least %x\n",
+   buf[0] & 0xfff, 0x195);
+   }
+
+   /*
+* When a660 is added those targets should return true here
+* since those have all the critical security fixes built in
+* from the start
+*/
+   }
+out:
msm_gem_put_vaddr(obj);
+   return ret;
  }
  
  static int a6xx_ucode_init(struct msm_gpu *gpu)

@@ -566,7 +611,13 @@ static int a6xx_ucode_init(struct msm_gpu *gpu)
}
  
  		msm_gem_object_set_name(a6xx_gpu->sqe_bo, "sqefw");

-   a6xx_ucode_check_version(a6xx_gpu, a6xx_gpu->sqe_bo);
+   if (!a6xx_ucode_check_version(a6xx_gpu, a6xx_gpu->sqe_bo)) {
+   msm_gem_unpin_iova(a6xx_gpu->sqe_bo, gpu->aspace);
+   drm_gem_object_put(a6xx_gpu->sqe_bo);
+
+   a6xx_gpu->sqe_bo = NULL;
+   return -EPERM;
+   }
}
  
  	gpu_write64(gpu, REG_A6XX_CP_SQE_INSTR_BASE_LO,



--
With best wishes
Dmitry


[Freedreno] [PATCH] /msm/adreno: fix different address spaces warning

2021-03-31 Thread Bernard Zhao
Fixes the following sparse warnings:
drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c:189:9:expected void [noderef] 
__iomem *addr
drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c:189:9:got void *
drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c:190:9: warning: incorrect type in 
argument 2 (different address spaces)
drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c:190:9:expected void [noderef] 
__iomem *addr
drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c:190:9:got void *
drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c:191:9: warning: incorrect type in 
argument 2 (different address spaces)
drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c:191:9:expected void [noderef] 
__iomem *addr
drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c:191:9:got void *
drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c:192:9: warning: incorrect type in 
argument 2 (different address spaces)
drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c:192:9:expected void [noderef] 
__iomem *addr
drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c:192:9:got void *
drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c:197:19: warning: incorrect type in 
argument 1 (different address spaces)
drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c:197:19:expected void const 
[noderef] __iomem *addr
drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c:197:19:got void *
drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c:198:19: warning: incorrect type in 
argument 1 (different address spaces)
drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c:198:19:expected void const 
[noderef] __iomem *addr
drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c:198:19:got void *
drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c:315:41: warning: incorrect type in 
argument 1 (different address spaces)
drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c:315:41:expected void *[noderef] 
__iomem cxdbg
drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c:315:41:got void [noderef] 
__iomem *cxdbg
drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c:189:9: warning: dereference of 
noderef expression
drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c:190:9: warning: dereference of 
noderef expression
drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c:191:9: warning: dereference of 
noderef expression
drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c:192:9: warning: dereference of 
noderef expression
drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c:197:19: warning: dereference of 
noderef expression
drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c:198:19: warning: dereference of 
noderef expression

Signed-off-by: Bernard Zhao 
---
 drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c | 36 ++---
 1 file changed, 18 insertions(+), 18 deletions(-)

diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c 
b/drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c
index c1699b4f9a89..e5558d09ddf9 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c
@@ -186,16 +186,16 @@ static int cx_debugbus_read(void *__iomem cxdbg, u32 
block, u32 offset,
u32 reg = A6XX_CX_DBGC_CFG_DBGBUS_SEL_A_PING_INDEX(offset) |
A6XX_CX_DBGC_CFG_DBGBUS_SEL_A_PING_BLK_SEL(block);
 
-   cxdbg_write(cxdbg, REG_A6XX_CX_DBGC_CFG_DBGBUS_SEL_A, reg);
-   cxdbg_write(cxdbg, REG_A6XX_CX_DBGC_CFG_DBGBUS_SEL_B, reg);
-   cxdbg_write(cxdbg, REG_A6XX_CX_DBGC_CFG_DBGBUS_SEL_C, reg);
-   cxdbg_write(cxdbg, REG_A6XX_CX_DBGC_CFG_DBGBUS_SEL_D, reg);
+   cxdbg_write(cxdbg, (void __iomem *)REG_A6XX_CX_DBGC_CFG_DBGBUS_SEL_A, reg);
+   cxdbg_write(cxdbg, (void __iomem *)REG_A6XX_CX_DBGC_CFG_DBGBUS_SEL_B, reg);
+   cxdbg_write(cxdbg, (void __iomem *)REG_A6XX_CX_DBGC_CFG_DBGBUS_SEL_C, reg);
+   cxdbg_write(cxdbg, (void __iomem *)REG_A6XX_CX_DBGC_CFG_DBGBUS_SEL_D, reg);
 
/* Wait 1 us to make sure the data is flowing */
udelay(1);
 
-   data[0] = cxdbg_read(cxdbg, REG_A6XX_CX_DBGC_CFG_DBGBUS_TRACE_BUF2);
-   data[1] = cxdbg_read(cxdbg, REG_A6XX_CX_DBGC_CFG_DBGBUS_TRACE_BUF1);
+   data[0] = cxdbg_read(cxdbg, (void __iomem *)REG_A6XX_CX_DBGC_CFG_DBGBUS_TRACE_BUF2);
+   data[1] = cxdbg_read(cxdbg, (void __iomem *)REG_A6XX_CX_DBGC_CFG_DBGBUS_TRACE_BUF1);
 
return 2;
 }
@@ -353,26 +353,26 @@ static void a6xx_get_debugbus(struct msm_gpu *gpu,
cxdbg = ioremap(res->start, resource_size(res));
 
if (cxdbg) {
-   cxdbg_write(cxdbg, REG_A6XX_CX_DBGC_CFG_DBGBUS_CNTLT,
+   cxdbg_write(cxdbg, (void __iomem *)REG_A6XX_CX_DBGC_CFG_DBGBUS_CNTLT,
            A6XX_DBGC_CFG_DBGBUS_CNTLT_SEGT(0xf));
 
-   cxdbg_write(cxdbg, REG_A6XX_CX_DBGC_CFG_DBGBUS_CNTLM,
+   cxdbg_write(cxdbg, (void __iomem *)REG_A6XX_CX_DBGC_CFG_DBGBUS_CNTLM,
            A6XX_DBGC_CFG_DBGBUS_CNTLM_ENABLE(0xf));
 
-   cxdbg_write(cxdbg, REG_A6XX_CX_DBGC_CFG_DBGBUS_IVTL_0, 0);
-   cxdbg_write(cxdbg, REG_A6XX_CX_DBGC_CFG_DBGBUS_IVTL_1, 0);
-   cxdbg_write(cxdbg, REG_A6XX_CX_DBGC_CFG_DBGBUS_IVTL_2, 0);
-   

[Freedreno] [PATCH] msm/disp: dpu_plane cleanup-coding-style-a-bit

2021-03-31 Thread Bernard Zhao
Fix sparse warning:
drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c:1195:41: warning: Using plain integer 
as NULL pointer

Signed-off-by: Bernard Zhao 
---
 drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c 
b/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c
index f898a8f67b7f..687a57850405 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c
@@ -1223,7 +1223,7 @@ static void _dpu_plane_atomic_disable(struct drm_plane 
*plane)
 {
struct dpu_plane *pdpu = to_dpu_plane(plane);
struct drm_plane_state *state = plane->state;
-   struct dpu_plane_state *pstate = to_dpu_plane_state(state);
+   struct dpu_plane_state *pstate = (struct dpu_plane_state *)to_dpu_plane_state(state);
 
trace_dpu_plane_disable(DRMID(plane), is_dpu_plane_virtual(plane),
pstate->multirect_mode);
-- 
2.31.0



Re: [Freedreno] [PATCH 16/18] iommu: remove DOMAIN_ATTR_DMA_USE_FLUSH_QUEUE

2021-03-31 Thread Robin Murphy

On 2021-03-31 12:49, Will Deacon wrote:

On Tue, Mar 30, 2021 at 05:28:19PM +0100, Robin Murphy wrote:

On 2021-03-30 14:58, Will Deacon wrote:

On Tue, Mar 30, 2021 at 02:19:38PM +0100, Robin Murphy wrote:

On 2021-03-30 14:11, Will Deacon wrote:

On Tue, Mar 16, 2021 at 04:38:22PM +0100, Christoph Hellwig wrote:

From: Robin Murphy 

Instead make the global iommu_dma_strict parameter in iommu.c canonical by
exporting helpers to get and set it and use those directly in the drivers.

This makes sure that the iommu.strict parameter also works for the AMD and
Intel IOMMU drivers on x86.  As those default to lazy flushing a new
IOMMU_CMD_LINE_STRICT is used to turn the value into a tristate to
represent the default if not overridden by an explicit parameter.

Signed-off-by: Robin Murphy .
[ported on top of the other iommu_attr changes and added a few small
missing bits]
Signed-off-by: Christoph Hellwig 
---
drivers/iommu/amd/iommu.c   | 23 +---
drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 50 +---
drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h |  1 -
drivers/iommu/arm/arm-smmu/arm-smmu.c   | 27 +
drivers/iommu/dma-iommu.c   |  9 +--
drivers/iommu/intel/iommu.c | 64 -
drivers/iommu/iommu.c   | 27 ++---
include/linux/iommu.h   |  4 +-
8 files changed, 40 insertions(+), 165 deletions(-)


I really like this cleanup, but I can't help wonder if it's going in the
wrong direction. With SoCs often having multiple IOMMU instances and a
distinction between "trusted" and "untrusted" devices, then having the
flush-queue enabled on a per-IOMMU or per-domain basis doesn't sound
unreasonable to me, but this change makes it a global property.


The intent here was just to streamline the existing behaviour of stuffing a
global property into a domain attribute then pulling it out again in the
illusion that it was in any way per-domain. We're still checking
dev_is_untrusted() before making an actual decision, and it's not like we
can't add more factors at that point if we want to.


Like I say, the cleanup is great. I'm just wondering whether there's a
better way to express the complicated logic to decide whether or not to use
the flush queue than what we end up with:

if (!cookie->fq_domain && (!dev || !dev_is_untrusted(dev)) &&
domain->ops->flush_iotlb_all && !iommu_get_dma_strict())

which is mixing up globals, device properties and domain properties. The
result is that the driver code ends up just using the global to determine
whether or not to pass IO_PGTABLE_QUIRK_NON_STRICT to the page-table code,
which is a departure from the current way of doing things.


But previously, SMMU only ever saw the global policy piped through the
domain attribute by iommu_group_alloc_default_domain(), so there's no
functional change there.


For DMA domains sure, but I don't think that's the case for unmanaged
domains such as those used by VFIO.


Eh? This is only relevant to DMA domains anyway. Flush queues are part 
of the IOVA allocator that VFIO doesn't even use. It's always been the 
case that unmanaged domains only use strict invalidation.



Obviously some of the above checks could be factored out into some kind of
iommu_use_flush_queue() helper that IOMMU drivers can also call if they need
to keep in sync. Or maybe we just allow iommu-dma to set
IO_PGTABLE_QUIRK_NON_STRICT directly via iommu_set_pgtable_quirks() if we're
treating that as a generic thing now.


I think a helper that takes a domain would be a good starting point.


You mean device, right? The one condition we currently have is at the 
device level, and there's really nothing inherent to the domain itself 
that matters (since the type is implicitly IOMMU_DOMAIN_DMA to even care 
about this).
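
For illustration only, the kind of helper being discussed might look roughly
like this (the name, signature and placement are hypothetical, it simply
restates the quoted condition in one place):

static bool iommu_use_flush_queue(struct iommu_domain *domain, struct device *dev)
{
	/* Untrusted devices always get strict invalidation. */
	if (dev && dev_is_untrusted(dev))
		return false;

	/* A flush queue needs driver support and a lazy (non-strict) global policy. */
	return domain->ops->flush_iotlb_all && !iommu_get_dma_strict();
}

iommu-dma could then test "!cookie->fq_domain && iommu_use_flush_queue(domain, dev)",
and drivers could call the same helper to stay in sync.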


Another idea that's just come to mind is now that IOMMU_DOMAIN_DMA has a 
standard meaning, maybe we could split out a separate 
IOMMU_DOMAIN_DMA_STRICT type such that it can all propagate from 
iommu_get_def_domain_type()? That feels like it might be quite 
promising, but I'd still do it as an improvement on top of this patch, 
since it's beyond just cleaning up the abuse of domain attributes to 
pass a command-line option around.


Robin.


Re: [Freedreno] [v1] drm/msm/disp/dpu1: icc path needs to be set before dpu runtime resume

2021-03-31 Thread kalyan_t

On 2021-03-31 00:04, Steev Klimaszewski wrote:

On 3/22/21 4:17 AM, Kalyan Thota wrote:

From: Kalyan Thota 

DPU runtime resume will request a minimum vote on the AXI bus, as
this is a necessary step before turning ON the AXI clock.

The change does the following:
1) Move the icc path set before requesting runtime get_sync.
2) Remove the dependency on the hw catalog for the min ib vote,
   as it is initialized at a later point.

Signed-off-by: Kalyan Thota 
---
 drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c | 12 +++-
 1 file changed, 7 insertions(+), 5 deletions(-)

diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c 
b/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c

index ed636f1..cab387f 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c
@@ -44,6 +44,8 @@
 #define DPU_DEBUGFS_DIR "msm_dpu"
 #define DPU_DEBUGFS_HWMASKNAME "hw_log_mask"

+#define MIN_IB_BW  400000000ULL /* Min ib vote 400MB */
+
 static int dpu_kms_hw_init(struct msm_kms *kms);
 static void _dpu_kms_mmu_destroy(struct dpu_kms *dpu_kms);

@@ -932,6 +934,9 @@ static int dpu_kms_hw_init(struct msm_kms *kms)
DPU_DEBUG("REG_DMA is not defined");
}

+   if (of_device_is_compatible(dev->dev->of_node, "qcom,sc7180-mdss"))
+   dpu_kms_parse_data_bus_icc_path(dpu_kms);
+
pm_runtime_get_sync(&dpu_kms->pdev->dev);

dpu_kms->core_rev = readl_relaxed(dpu_kms->mmio + 0x0);
@@ -1037,9 +1042,6 @@ static int dpu_kms_hw_init(struct msm_kms *kms)

dpu_vbif_init_memtypes(dpu_kms);

-   if (of_device_is_compatible(dev->dev->of_node, "qcom,sc7180-mdss"))
-   dpu_kms_parse_data_bus_icc_path(dpu_kms);
-
pm_runtime_put_sync(&dpu_kms->pdev->dev);

return 0;
@@ -1196,10 +1198,10 @@ static int __maybe_unused 
dpu_runtime_resume(struct device *dev)


ddev = dpu_kms->dev;

+   WARN_ON(!(dpu_kms->num_paths));
/* Min vote of BW is required before turning on AXI clk */
for (i = 0; i < dpu_kms->num_paths; i++)
-   icc_set_bw(dpu_kms->path[i], 0,
-   dpu_kms->catalog->perf.min_dram_ib);
+   icc_set_bw(dpu_kms->path[i], 0, Bps_to_icc(MIN_IB_BW));

rc = msm_dss_enable_clk(mp->clk_config, mp->num_clk, true);
if (rc) {


With this patch now applied to 5.12-rc5, I am seeing the following when
booting the Lenovo Yoga C630 -

Mar 30 13:16:03 c630 kernel: [2.038491] [ cut here ]
Mar 30 13:16:03 c630 kernel: [2.038495] WARNING: CPU: 3 PID: 125
at drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c:1196
dpu_runtime_resume+0xc0/0xf0 [msm]
Mar 30 13:16:03 c630 kernel: [2.038551] Modules linked in:
ti_sn65dsi86 i2c_hid_of crct10dif_ce msm rtc_pm8xxx llcc_qcom ocmem
drm_kms_helper i2c_qcom_geni phy_qcom_qusb2 ipa(+) qcom_common
qcom_glink_smem qmi_helpers mdt_loader panel_simple drm pwm_bl
Mar 30 13:16:03 c630 kernel: [2.038599] CPU: 3 PID: 125 Comm:
kworker/3:1 Not tainted 5.12.0-rc5 #1
Mar 30 13:16:03 c630 kernel: [2.038605] Hardware name: LENOVO
81JL/LNVNB161216, BIOS 9UCN33WW(V2.06) 06/ 4/2019
Mar 30 13:16:03 c630 kernel: [2.038610] Workqueue: events
deferred_probe_work_func
Mar 30 13:16:03 c630 kernel: [2.038621] pstate: 6045 (nZCv
daif +PAN -UAO -TCO BTYPE=--)
Mar 30 13:16:03 c630 kernel: [2.038627] pc :
dpu_runtime_resume+0xc0/0xf0 [msm]
Mar 30 13:16:03 c630 kernel: [2.038674] lr :
pm_generic_runtime_resume+0x30/0x50
Mar 30 13:16:03 c630 kernel: [2.038683] sp : 800010b9b7e0
Mar 30 13:16:03 c630 kernel: [2.038685] x29: 800010b9b7e0 x28:

Mar 30 13:16:03 c630 kernel: [2.038692] x27:  x26:
6b42c0c16cf4
Mar 30 13:16:03 c630 kernel: [2.038698] x25: 7965f7df x24:
0001
Mar 30 13:16:03 c630 kernel: [2.038705] x23: 6b42c0a34180 x22:
da2e0cc5b3d0
Mar 30 13:16:03 c630 kernel: [2.038712] x21: da2e0b3ed6a0 x20:
6b42c6845000
Mar 30 13:16:03 c630 kernel: [2.038718] x19: 6b42c6851080 x18:
da2e0cce1220
Mar 30 13:16:03 c630 kernel: [2.038725] x17: da2e0cce1238 x16:
da2e0b23e5f0
Mar 30 13:16:03 c630 kernel: [2.038731] x15: 4000 x14:

Mar 30 13:16:03 c630 kernel: [2.038738] x13: 6b42c5f0b5b0 x12:

Mar 30 13:16:03 c630 kernel: [2.038744] x11: 0001 x10:
3fff
Mar 30 13:16:03 c630 kernel: [2.038750] x9 :  x8 :

Mar 30 13:16:03 c630 kernel: [2.038755] x7 :  x6 :
0c473b7e
Mar 30 13:16:03 c630 kernel: [2.038761] x5 : 00ff x4 :
00221806fff8f800
Mar 30 13:16:03 c630 kernel: [2.038768] x3 : 0018 x2 :
da2dc3d34320
Mar 30 13:16:03 c630 kernel: [2.038774] x1 :  x0 :

Mar 30 13:16:03 c630 kernel: [2.038781] Call trace:
Mar 30 13:16:03 c630 kernel: [2.038784]  
dpu_runtime_resume+0xc0/0xf0 [msm]
Mar 30 13:16

Re: [Freedreno] [PATCH 16/18] iommu: remove DOMAIN_ATTR_DMA_USE_FLUSH_QUEUE

2021-03-31 Thread Will Deacon
On Tue, Mar 30, 2021 at 05:28:19PM +0100, Robin Murphy wrote:
> On 2021-03-30 14:58, Will Deacon wrote:
> > On Tue, Mar 30, 2021 at 02:19:38PM +0100, Robin Murphy wrote:
> > > On 2021-03-30 14:11, Will Deacon wrote:
> > > > On Tue, Mar 16, 2021 at 04:38:22PM +0100, Christoph Hellwig wrote:
> > > > > From: Robin Murphy 
> > > > > 
> > > > > Instead make the global iommu_dma_strict parameter in iommu.c canonical by
> > > > > exporting helpers to get and set it and use those directly in the drivers.
> > > > > 
> > > > > This makes sure that the iommu.strict parameter also works for the AMD and
> > > > > Intel IOMMU drivers on x86.  As those default to lazy flushing a new
> > > > > IOMMU_CMD_LINE_STRICT is used to turn the value into a tristate to
> > > > > represent the default if not overridden by an explicit parameter.
> > > > > 
> > > > > Signed-off-by: Robin Murphy .
> > > > > [ported on top of the other iommu_attr changes and added a few small
> > > > >missing bits]
> > > > > Signed-off-by: Christoph Hellwig 
> > > > > ---
> > > > >drivers/iommu/amd/iommu.c   | 23 +---
> > > > >drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 50 +---
> > > > >drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h |  1 -
> > > > >drivers/iommu/arm/arm-smmu/arm-smmu.c   | 27 +
> > > > >drivers/iommu/dma-iommu.c   |  9 +--
> > > > >drivers/iommu/intel/iommu.c | 64 
> > > > > -
> > > > >drivers/iommu/iommu.c   | 27 ++---
> > > > >include/linux/iommu.h   |  4 +-
> > > > >8 files changed, 40 insertions(+), 165 deletions(-)
> > > > 
> > > > I really like this cleanup, but I can't help wonder if it's going in the
> > > > wrong direction. With SoCs often having multiple IOMMU instances and a
> > > > distinction between "trusted" and "untrusted" devices, then having the
> > > > flush-queue enabled on a per-IOMMU or per-domain basis doesn't sound
> > > > unreasonable to me, but this change makes it a global property.
> > > 
> > > The intent here was just to streamline the existing behaviour of stuffing 
> > > a
> > > global property into a domain attribute then pulling it out again in the
> > > illusion that it was in any way per-domain. We're still checking
> > > dev_is_untrusted() before making an actual decision, and it's not like we
> > > can't add more factors at that point if we want to.
> > 
> > Like I say, the cleanup is great. I'm just wondering whether there's a
> > better way to express the complicated logic to decide whether or not to use
> > the flush queue than what we end up with:
> > 
> > if (!cookie->fq_domain && (!dev || !dev_is_untrusted(dev)) &&
> > domain->ops->flush_iotlb_all && !iommu_get_dma_strict())
> > 
> > which is mixing up globals, device properties and domain properties. The
> > result is that the driver code ends up just using the global to determine
> > whether or not to pass IO_PGTABLE_QUIRK_NON_STRICT to the page-table code,
> > which is a departure from the current way of doing things.
> 
> But previously, SMMU only ever saw the global policy piped through the
> domain attribute by iommu_group_alloc_default_domain(), so there's no
> functional change there.

For DMA domains sure, but I don't think that's the case for unmanaged
domains such as those used by VFIO.

> Obviously some of the above checks could be factored out into some kind of
> iommu_use_flush_queue() helper that IOMMU drivers can also call if they need
> to keep in sync. Or maybe we just allow iommu-dma to set
> IO_PGTABLE_QUIRK_NON_STRICT directly via iommu_set_pgtable_quirks() if we're
> treating that as a generic thing now.

I think a helper that takes a domain would be a good starting point.

Will


[Freedreno] [v1] drm/msm/disp/dpu1: fix warn stack reported during dpu resume

2021-03-31 Thread Kalyan Thota
WARN_ON was introduced by the below commit to catch runtime resumes
that are getting triggered before icc path was set.

"drm/msm/disp/dpu1: icc path needs to be set before dpu runtime resume"

For the targets where the bw scaling is not enabled, this WARN_ON is
a false alarm. Fix the WARN condition appropriately.

Reported-by: Steev Klimaszewski 
Signed-off-by: Kalyan Thota 
---
 drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c  |  8 +---
 drivers/gpu/drm/msm/disp/dpu1/dpu_kms.h  |  9 +
 drivers/gpu/drm/msm/disp/dpu1/dpu_mdss.c | 11 ++-
 3 files changed, 20 insertions(+), 8 deletions(-)

diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c 
b/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c
index cab387f..0071a4d 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c
@@ -294,6 +294,9 @@ static int dpu_kms_parse_data_bus_icc_path(struct dpu_kms 
*dpu_kms)
struct icc_path *path1;
struct drm_device *dev = dpu_kms->dev;
 
+   if (!dpu_supports_bw_scaling(dev))
+   return 0;
+
path0 = of_icc_get(dev->dev, "mdp0-mem");
path1 = of_icc_get(dev->dev, "mdp1-mem");
 
@@ -934,8 +937,7 @@ static int dpu_kms_hw_init(struct msm_kms *kms)
DPU_DEBUG("REG_DMA is not defined");
}
 
-   if (of_device_is_compatible(dev->dev->of_node, "qcom,sc7180-mdss"))
-   dpu_kms_parse_data_bus_icc_path(dpu_kms);
+   dpu_kms_parse_data_bus_icc_path(dpu_kms);
 
pm_runtime_get_sync(&dpu_kms->pdev->dev);
 
@@ -1198,7 +1200,7 @@ static int __maybe_unused dpu_runtime_resume(struct 
device *dev)
 
ddev = dpu_kms->dev;
 
-   WARN_ON(!(dpu_kms->num_paths));
+   WARN_ON((dpu_supports_bw_scaling(ddev) && !dpu_kms->num_paths));
/* Min vote of BW is required before turning on AXI clk */
for (i = 0; i < dpu_kms->num_paths; i++)
icc_set_bw(dpu_kms->path[i], 0, Bps_to_icc(MIN_IB_BW));
diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.h 
b/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.h
index d6717d6..f7bcc0a 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.h
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.h
@@ -154,6 +154,15 @@ struct vsync_info {
 
 #define to_dpu_global_state(x) container_of(x, struct dpu_global_state, base)
 
+/**
+ * dpu_supports_bw_scaling: returns true for drivers that support bw scaling.
+ * @dev: Pointer to drm_device structure
+ */
+static inline int dpu_supports_bw_scaling(struct drm_device *dev)
+{
+   return of_device_is_compatible(dev->dev->of_node, "qcom,sc7180-mdss");
+}
+
 /* Global private object state for tracking resources that are shared across
  * multiple kms objects (planes/crtcs/etc).
  */
diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_mdss.c 
b/drivers/gpu/drm/msm/disp/dpu1/dpu_mdss.c
index cd40788..8cd712c 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_mdss.c
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_mdss.c
@@ -41,6 +41,9 @@ static int dpu_mdss_parse_data_bus_icc_path(struct drm_device 
*dev,
struct icc_path *path0 = of_icc_get(dev->dev, "mdp0-mem");
struct icc_path *path1 = of_icc_get(dev->dev, "mdp1-mem");
 
+   if (dpu_supports_bw_scaling(dev))
+   return 0;
+
if (IS_ERR_OR_NULL(path0))
return PTR_ERR_OR_ZERO(path0);
 
@@ -276,11 +279,9 @@ int dpu_mdss_init(struct drm_device *dev)
 
DRM_DEBUG("mapped mdss address space @%pK\n", dpu_mdss->mmio);
 
-   if (!of_device_is_compatible(dev->dev->of_node, "qcom,sc7180-mdss")) {
-   ret = dpu_mdss_parse_data_bus_icc_path(dev, dpu_mdss);
-   if (ret)
-   return ret;
-   }
+   ret = dpu_mdss_parse_data_bus_icc_path(dev, dpu_mdss);
+   if (ret)
+   return ret;
 
mp = &dpu_mdss->mp;
ret = msm_dss_parse_clock(pdev, mp);
-- 
2.7.4



[Freedreno] [PATCH v4 24/24] drm/msm/dsi: stop passing src_pll_id to the phy_enable call

2021-03-31 Thread Dmitry Baryshkov
The PHY driver already knows the source PLL id, based on the configured
usecase and the current PLL id. Stop passing it to the phy_enable call. As a
reminder, the DSI manager will always use DSI 0 as the clock master in slave
mode, so PLL 0 is always the clock source for DSI 0, and it is also the
clock source for DSI 1 unless DSI 1 is used in standalone mode.
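
As a minimal illustration of that statement (the helper below is
hypothetical; the real selection happens in the per-PHY set_usecase
callbacks):

static int dsi_phy_source_pll(const struct msm_dsi_phy *phy)
{
	/* DSI 0 is always the clock master, so a slave PHY sources its
	 * clocks from PLL 0; otherwise the PHY uses its own PLL.
	 */
	return phy->usecase == MSM_DSI_PHY_SLAVE ? DSI_0 : phy->id;
}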

Signed-off-by: Dmitry Baryshkov 
Tested-by: Stephen Boyd  # on sc7180 lazor
---
 drivers/gpu/drm/msm/dsi/dsi.h   |  2 +-
 drivers/gpu/drm/msm/dsi/dsi_manager.c   | 11 +--
 drivers/gpu/drm/msm/dsi/phy/dsi_phy.c   |  4 ++--
 drivers/gpu/drm/msm/dsi/phy/dsi_phy.h   |  2 +-
 drivers/gpu/drm/msm/dsi/phy/dsi_phy_10nm.c  |  2 +-
 drivers/gpu/drm/msm/dsi/phy/dsi_phy_14nm.c  |  4 ++--
 drivers/gpu/drm/msm/dsi/phy/dsi_phy_20nm.c  |  4 ++--
 drivers/gpu/drm/msm/dsi/phy/dsi_phy_28nm.c  |  4 ++--
 drivers/gpu/drm/msm/dsi/phy/dsi_phy_28nm_8960.c |  2 +-
 drivers/gpu/drm/msm/dsi/phy/dsi_phy_7nm.c   |  2 +-
 10 files changed, 18 insertions(+), 19 deletions(-)

diff --git a/drivers/gpu/drm/msm/dsi/dsi.h b/drivers/gpu/drm/msm/dsi/dsi.h
index 7f99e12efd52..7abfeab08165 100644
--- a/drivers/gpu/drm/msm/dsi/dsi.h
+++ b/drivers/gpu/drm/msm/dsi/dsi.h
@@ -162,7 +162,7 @@ struct msm_dsi_phy_clk_request {
 
 void msm_dsi_phy_driver_register(void);
 void msm_dsi_phy_driver_unregister(void);
-int msm_dsi_phy_enable(struct msm_dsi_phy *phy, int src_pll_id,
+int msm_dsi_phy_enable(struct msm_dsi_phy *phy,
struct msm_dsi_phy_clk_request *clk_req);
 void msm_dsi_phy_disable(struct msm_dsi_phy *phy);
 void msm_dsi_phy_get_shared_timings(struct msm_dsi_phy *phy,
diff --git a/drivers/gpu/drm/msm/dsi/dsi_manager.c 
b/drivers/gpu/drm/msm/dsi/dsi_manager.c
index e116e5ff5d24..cd016576e8c5 100644
--- a/drivers/gpu/drm/msm/dsi/dsi_manager.c
+++ b/drivers/gpu/drm/msm/dsi/dsi_manager.c
@@ -114,7 +114,7 @@ static int dsi_mgr_setup_components(int id)
return ret;
 }
 
-static int enable_phy(struct msm_dsi *msm_dsi, int src_pll_id,
+static int enable_phy(struct msm_dsi *msm_dsi,
  struct msm_dsi_phy_shared_timings *shared_timings)
 {
struct msm_dsi_phy_clk_request clk_req;
@@ -123,7 +123,7 @@ static int enable_phy(struct msm_dsi *msm_dsi, int 
src_pll_id,
 
msm_dsi_host_get_phy_clk_req(msm_dsi->host, &clk_req, is_dual_dsi);
 
-   ret = msm_dsi_phy_enable(msm_dsi->phy, src_pll_id, &clk_req);
+   ret = msm_dsi_phy_enable(msm_dsi->phy, &clk_req);
msm_dsi_phy_get_shared_timings(msm_dsi->phy, shared_timings);
 
return ret;
@@ -136,7 +136,6 @@ dsi_mgr_phy_enable(int id,
struct msm_dsi *msm_dsi = dsi_mgr_get_dsi(id);
struct msm_dsi *mdsi = dsi_mgr_get_dsi(DSI_CLOCK_MASTER);
struct msm_dsi *sdsi = dsi_mgr_get_dsi(DSI_CLOCK_SLAVE);
-   int src_pll_id = IS_DUAL_DSI() ? DSI_CLOCK_MASTER : id;
int ret;
 
/* In case of dual DSI, some registers in PHY1 have been programmed
@@ -149,11 +148,11 @@ dsi_mgr_phy_enable(int id,
msm_dsi_host_reset_phy(mdsi->host);
msm_dsi_host_reset_phy(sdsi->host);
 
-   ret = enable_phy(mdsi, src_pll_id,
+   ret = enable_phy(mdsi,
 &shared_timings[DSI_CLOCK_MASTER]);
if (ret)
return ret;
-   ret = enable_phy(sdsi, src_pll_id,
+   ret = enable_phy(sdsi,
 &shared_timings[DSI_CLOCK_SLAVE]);
if (ret) {
msm_dsi_phy_disable(mdsi->phy);
@@ -162,7 +161,7 @@ dsi_mgr_phy_enable(int id,
}
} else {
msm_dsi_host_reset_phy(msm_dsi->host);
-   ret = enable_phy(msm_dsi, src_pll_id, &shared_timings[id]);
+   ret = enable_phy(msm_dsi, &shared_timings[id]);
if (ret)
return ret;
}
diff --git a/drivers/gpu/drm/msm/dsi/phy/dsi_phy.c 
b/drivers/gpu/drm/msm/dsi/phy/dsi_phy.c
index 93e81bb78d26..f0a2ddf96a4b 100644
--- a/drivers/gpu/drm/msm/dsi/phy/dsi_phy.c
+++ b/drivers/gpu/drm/msm/dsi/phy/dsi_phy.c
@@ -753,7 +753,7 @@ void __exit msm_dsi_phy_driver_unregister(void)
platform_driver_unregister(&dsi_phy_platform_driver);
 }
 
-int msm_dsi_phy_enable(struct msm_dsi_phy *phy, int src_pll_id,
+int msm_dsi_phy_enable(struct msm_dsi_phy *phy,
struct msm_dsi_phy_clk_request *clk_req)
 {
struct device *dev = &phy->pdev->dev;
@@ -776,7 +776,7 @@ int msm_dsi_phy_enable(struct msm_dsi_phy *phy, int 
src_pll_id,
goto reg_en_fail;
}
 
-   ret = phy->cfg->ops.enable(phy, src_pll_id, clk_req);
+   ret = phy->cfg->ops.enable(phy, clk_req);
if (ret) {
DRM_DEV_ERROR(dev, "%s: phy enable failed, %d\n", __func__, 
ret)

[Freedreno] [PATCH v4 18/24] drm/msm/dsi: drop msm_dsi_pll abstraction

2021-03-31 Thread Dmitry Baryshkov
Drop the struct msm_dsi_pll abstraction by including the VCO's clk_hw
directly in struct msm_dsi_phy.
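
A simplified sketch of the resulting layout, using the 10nm driver as an
example (the "_sketch" suffix marks these as illustrative, not the real
definitions from the diff below):

struct dsi_pll_10nm_sketch {
	struct clk_hw clk_hw;		/* the VCO clock, previously wrapped in msm_dsi_pll */
	struct msm_dsi_phy *phy;	/* back-pointer to the owning PHY */
};

#define to_pll_10nm_sketch(x) container_of(x, struct dsi_pll_10nm_sketch, clk_hw)

The PHY itself then only keeps a struct clk_hw *vco_hw pointer to the
embedded clock plus a pll_on flag.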

Signed-off-by: Dmitry Baryshkov 
Reviewed-by: Abhinav Kumar 
Tested-by: Stephen Boyd  # on sc7180 lazor
Signed-off-by: Dmitry Baryshkov 
---
 drivers/gpu/drm/msm/Kconfig   |  8 --
 drivers/gpu/drm/msm/Makefile  |  2 -
 drivers/gpu/drm/msm/dsi/phy/dsi_phy.h | 36 +---
 drivers/gpu/drm/msm/dsi/phy/dsi_phy_10nm.c| 66 ---
 drivers/gpu/drm/msm/dsi/phy/dsi_phy_14nm.c| 78 -
 drivers/gpu/drm/msm/dsi/phy/dsi_phy_28nm.c| 83 ++-
 .../gpu/drm/msm/dsi/phy/dsi_phy_28nm_8960.c   | 65 ---
 drivers/gpu/drm/msm/dsi/phy/dsi_phy_7nm.c | 74 +
 drivers/gpu/drm/msm/dsi/phy/dsi_pll.c | 23 -
 drivers/gpu/drm/msm/dsi/phy/dsi_pll.h | 44 --
 10 files changed, 221 insertions(+), 258 deletions(-)
 delete mode 100644 drivers/gpu/drm/msm/dsi/phy/dsi_pll.c
 delete mode 100644 drivers/gpu/drm/msm/dsi/phy/dsi_pll.h

diff --git a/drivers/gpu/drm/msm/Kconfig b/drivers/gpu/drm/msm/Kconfig
index dabb4a1ccdcf..1f0b3f0e7149 100644
--- a/drivers/gpu/drm/msm/Kconfig
+++ b/drivers/gpu/drm/msm/Kconfig
@@ -76,14 +76,6 @@ config DRM_MSM_DSI
  Choose this option if you have a need for MIPI DSI connector
  support.
 
-config DRM_MSM_DSI_PLL
-   bool "Enable DSI PLL driver in MSM DRM"
-   depends on DRM_MSM_DSI && COMMON_CLK
-   default y
-   help
- Choose this option to enable DSI PLL driver which provides DSI
- source clocks under common clock framework.
-
 config DRM_MSM_DSI_28NM_PHY
bool "Enable DSI 28nm PHY driver in MSM DRM"
depends on DRM_MSM_DSI
diff --git a/drivers/gpu/drm/msm/Makefile b/drivers/gpu/drm/msm/Makefile
index 1be6996b80b7..610d630326bb 100644
--- a/drivers/gpu/drm/msm/Makefile
+++ b/drivers/gpu/drm/msm/Makefile
@@ -136,6 +136,4 @@ msm-$(CONFIG_DRM_MSM_DSI_14NM_PHY) += dsi/phy/dsi_phy_14nm.o
 msm-$(CONFIG_DRM_MSM_DSI_10NM_PHY) += dsi/phy/dsi_phy_10nm.o
 msm-$(CONFIG_DRM_MSM_DSI_7NM_PHY) += dsi/phy/dsi_phy_7nm.o
 
-msm-$(CONFIG_DRM_MSM_DSI_PLL) += dsi/phy/dsi_pll.o
-
 obj-$(CONFIG_DRM_MSM)  += msm.o
diff --git a/drivers/gpu/drm/msm/dsi/phy/dsi_phy.h 
b/drivers/gpu/drm/msm/dsi/phy/dsi_phy.h
index 0b51828c3146..e80560f38d80 100644
--- a/drivers/gpu/drm/msm/dsi/phy/dsi_phy.h
+++ b/drivers/gpu/drm/msm/dsi/phy/dsi_phy.h
@@ -7,6 +7,7 @@
 #define __DSI_PHY_H__
 
 #include 
+#include 
 #include 
 
 #include "dsi.h"
@@ -14,15 +15,6 @@
 #define dsi_phy_read(offset) msm_readl((offset))
 #define dsi_phy_write(offset, data) msm_writel((data), (offset))
 
-struct msm_dsi_pll {
-   struct clk_hw   clk_hw;
-   boolpll_on;
-
-   const struct msm_dsi_phy_cfg *cfg;
-};
-
-#define hw_clk_to_pll(x) container_of(x, struct msm_dsi_pll, clk_hw)
-
 struct msm_dsi_phy_ops {
int (*pll_init)(struct msm_dsi_phy *phy);
int (*enable)(struct msm_dsi_phy *phy, int src_pll_id,
@@ -107,7 +99,8 @@ struct msm_dsi_phy {
enum msm_dsi_phy_usecase usecase;
bool regulator_ldo_mode;
 
-   struct msm_dsi_pll *pll;
+   struct clk_hw *vco_hw;
+   bool pll_on;
 
struct clk_hw_onecell_data *provided_clocks;
 
@@ -127,6 +120,27 @@ int msm_dsi_dphy_timing_calc_v4(struct msm_dsi_dphy_timing 
*timing,
struct msm_dsi_phy_clk_request *clk_req);
 void msm_dsi_phy_set_src_pll(struct msm_dsi_phy *phy, int pll_id, u32 reg,
u32 bit_mask);
+/* PLL accessors */
+static inline void pll_write(void __iomem *reg, u32 data)
+{
+   msm_writel(data, reg);
+}
+
+static inline u32 pll_read(const void __iomem *reg)
+{
+   return msm_readl(reg);
+}
+
+static inline void pll_write_udelay(void __iomem *reg, u32 data, u32 delay_us)
+{
+   pll_write(reg, data);
+   udelay(delay_us);
+}
+
+static inline void pll_write_ndelay(void __iomem *reg, u32 data, u32 delay_ns)
+{
+   pll_write((reg), data);
+   ndelay(delay_ns);
+}
 
 #endif /* __DSI_PHY_H__ */
-
diff --git a/drivers/gpu/drm/msm/dsi/phy/dsi_phy_10nm.c 
b/drivers/gpu/drm/msm/dsi/phy/dsi_phy_10nm.c
index fefff08f83fd..cbf3d64d5efb 100644
--- a/drivers/gpu/drm/msm/dsi/phy/dsi_phy_10nm.c
+++ b/drivers/gpu/drm/msm/dsi/phy/dsi_phy_10nm.c
@@ -7,7 +7,6 @@
 #include 
 #include 
 
-#include "dsi_pll.h"
 #include "dsi_phy.h"
 #include "dsi.xml.h"
 
@@ -85,11 +84,13 @@ struct pll_10nm_cached_state {
 };
 
 struct dsi_pll_10nm {
-   struct msm_dsi_pll base;
+   struct clk_hw clk_hw;
 
int id;
struct platform_device *pdev;
 
+   struct msm_dsi_phy *phy;
+
void __iomem *phy_cmn_mmio;
void __iomem *mmio;
 
@@ -104,11 +105,10 @@ struct dsi_pll_10nm {
 
struct pll_10nm_cached_state cached_state;
 
-   enum msm_dsi_phy_usecase uc;
struct dsi_pll_10nm *slave;
 };
 
-#define to_pll_10nm(x) container_of(x, struct dsi_pll_10nm, base)

[Freedreno] [PATCH v4 21/24] drm/msm/dsi: remove duplicate fields from dsi_pll_Nnm instances

2021-03-31 Thread Dmitry Baryshkov
Drop duplicate fields pdev and id from dsi_pll_Nnm instances. Reuse
those fields from the provided msm_dsi_phy.

Signed-off-by: Dmitry Baryshkov 
Reviewed-by: Abhinav Kumar 
Tested-by: Stephen Boyd  # on sc7180 lazor
---
 drivers/gpu/drm/msm/dsi/phy/dsi_phy_10nm.c| 72 +--
 drivers/gpu/drm/msm/dsi/phy/dsi_phy_14nm.c| 54 +++---
 drivers/gpu/drm/msm/dsi/phy/dsi_phy_28nm.c| 48 ++---
 .../gpu/drm/msm/dsi/phy/dsi_phy_28nm_8960.c   | 26 +++
 drivers/gpu/drm/msm/dsi/phy/dsi_phy_7nm.c | 68 --
 5 files changed, 119 insertions(+), 149 deletions(-)

diff --git a/drivers/gpu/drm/msm/dsi/phy/dsi_phy_10nm.c 
b/drivers/gpu/drm/msm/dsi/phy/dsi_phy_10nm.c
index ef92c3f38a9a..34c1c216e738 100644
--- a/drivers/gpu/drm/msm/dsi/phy/dsi_phy_10nm.c
+++ b/drivers/gpu/drm/msm/dsi/phy/dsi_phy_10nm.c
@@ -86,9 +86,6 @@ struct pll_10nm_cached_state {
 struct dsi_pll_10nm {
struct clk_hw clk_hw;
 
-   int id;
-   struct platform_device *pdev;
-
struct msm_dsi_phy *phy;
 
u64 vco_ref_clk_rate;
@@ -301,7 +298,7 @@ static int dsi_pll_10nm_vco_set_rate(struct clk_hw *hw, 
unsigned long rate,
 {
struct dsi_pll_10nm *pll_10nm = to_pll_10nm(hw);
 
-   DBG("DSI PLL%d rate=%lu, parent's=%lu", pll_10nm->id, rate,
+   DBG("DSI PLL%d rate=%lu, parent's=%lu", pll_10nm->phy->id, rate,
parent_rate);
 
pll_10nm->vco_current_rate = rate;
@@ -327,7 +324,7 @@ static int dsi_pll_10nm_vco_set_rate(struct clk_hw *hw, 
unsigned long rate,
 
 static int dsi_pll_10nm_lock_status(struct dsi_pll_10nm *pll)
 {
-   struct device *dev = &pll->pdev->dev;
+   struct device *dev = &pll->phy->pdev->dev;
int rc;
u32 status = 0;
u32 const delay_us = 100;
@@ -341,7 +338,7 @@ static int dsi_pll_10nm_lock_status(struct dsi_pll_10nm 
*pll)
   timeout_us);
if (rc)
DRM_DEV_ERROR(dev, "DSI PLL(%d) lock failed, status=0x%08x\n",
- pll->id, status);
+ pll->phy->id, status);
 
return rc;
 }
@@ -387,7 +384,7 @@ static void dsi_pll_enable_global_clk(struct dsi_pll_10nm 
*pll)
 static int dsi_pll_10nm_vco_prepare(struct clk_hw *hw)
 {
struct dsi_pll_10nm *pll_10nm = to_pll_10nm(hw);
-   struct device *dev = &pll_10nm->pdev->dev;
+   struct device *dev = &pll_10nm->phy->pdev->dev;
int rc;
 
dsi_pll_enable_pll_bias(pll_10nm);
@@ -413,7 +410,7 @@ static int dsi_pll_10nm_vco_prepare(struct clk_hw *hw)
/* Check for PLL lock */
rc = dsi_pll_10nm_lock_status(pll_10nm);
if (rc) {
-   DRM_DEV_ERROR(dev, "PLL(%d) lock failed\n", pll_10nm->id);
+   DRM_DEV_ERROR(dev, "PLL(%d) lock failed\n", pll_10nm->phy->id);
goto error;
}
 
@@ -494,7 +491,7 @@ static unsigned long dsi_pll_10nm_vco_recalc_rate(struct 
clk_hw *hw,
vco_rate = pll_freq;
 
DBG("DSI PLL%d returning vco rate = %lu, dec = %x, frac = %x",
-   pll_10nm->id, (unsigned long)vco_rate, dec, frac);
+   pll_10nm->phy->id, (unsigned long)vco_rate, dec, frac);
 
return (unsigned long)vco_rate;
 }
@@ -543,7 +540,7 @@ static void dsi_10nm_pll_save_state(struct msm_dsi_phy *phy)
cached->pll_mux = cmn_clk_cfg1 & 0x3;
 
DBG("DSI PLL%d outdiv %x bit_clk_div %x pix_clk_div %x pll_mux %x",
-   pll_10nm->id, cached->pll_out_div, cached->bit_clk_div,
+   pll_10nm->phy->id, cached->pll_out_div, cached->bit_clk_div,
cached->pix_clk_div, cached->pll_mux);
 }
 
@@ -572,12 +569,12 @@ static int dsi_10nm_pll_restore_state(struct msm_dsi_phy 
*phy)
pll_10nm->vco_current_rate,
pll_10nm->vco_ref_clk_rate);
if (ret) {
-   DRM_DEV_ERROR(&pll_10nm->pdev->dev,
+   DRM_DEV_ERROR(&pll_10nm->phy->pdev->dev,
"restore vco rate failed. ret=%d\n", ret);
return ret;
}
 
-   DBG("DSI PLL%d", pll_10nm->id);
+   DBG("DSI PLL%d", pll_10nm->phy->id);
 
return 0;
 }
@@ -588,13 +585,13 @@ static int dsi_10nm_set_usecase(struct msm_dsi_phy *phy)
void __iomem *base = phy->base;
u32 data = 0x0; /* internal PLL */
 
-   DBG("DSI PLL%d", pll_10nm->id);
+   DBG("DSI PLL%d", pll_10nm->phy->id);
 
switch (phy->usecase) {
case MSM_DSI_PHY_STANDALONE:
break;
case MSM_DSI_PHY_MASTER:
-   pll_10nm->slave = pll_10nm_list[(pll_10nm->id + 1) % DSI_MAX];
-   pll_10nm->slave = pll_10nm_list[(pll_10nm->phy->id + 1) % DSI_MAX];
break;
case MSM_DSI_PHY_SLAVE:
data = 0x1; /* external PLL */
@@ -626,21 +623,21 @@ static int pll_10nm_register(struct dsi_pll_10nm 
*pll_10nm, struct clk_hw **prov
.flags = CLK_IGNORE_UNUSED,
.ops = &clk

[Freedreno] [PATCH v4 22/24] drm/msm/dsi: remove temp data from global pll structure

2021-03-31 Thread Dmitry Baryshkov
The 7nm, 10nm and 14nm drivers would store interim data used during
VCO/PLL rate setting in the global dsi_pll_Nnm structure. Move these data
structures to on-stack storage. While we are at it, drop the
unused/static 'config' data, unused config fields, etc.
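
A sketch of the resulting call pattern, using the helper signatures that
appear in the 10nm hunk below (simplified; SSC calculation, register
commits and error handling omitted):

static int dsi_pll_10nm_set_rate_sketch(struct dsi_pll_10nm *pll_10nm,
					unsigned long rate)
{
	struct dsi_pll_config config = { };	/* interim data, now on the stack */

	pll_10nm->vco_current_rate = rate;

	dsi_pll_setup_config(&config);
	dsi_pll_calc_dec_frac(pll_10nm, &config);
	/* dsi_pll_calc_ssc() and dsi_pll_commit() would take &config as well */

	return 0;
}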

Signed-off-by: Dmitry Baryshkov 
Reviewed-by: Abhinav Kumar 
Tested-by: Stephen Boyd  # on sc7180 lazor
---
 drivers/gpu/drm/msm/dsi/phy/dsi_phy_10nm.c | 167 ---
 drivers/gpu/drm/msm/dsi/phy/dsi_phy_14nm.c | 334 +++--
 drivers/gpu/drm/msm/dsi/phy/dsi_phy_7nm.c  | 166 --
 3 files changed, 220 insertions(+), 447 deletions(-)

diff --git a/drivers/gpu/drm/msm/dsi/phy/dsi_phy_10nm.c 
b/drivers/gpu/drm/msm/dsi/phy/dsi_phy_10nm.c
index 34c1c216e738..655996cf8688 100644
--- a/drivers/gpu/drm/msm/dsi/phy/dsi_phy_10nm.c
+++ b/drivers/gpu/drm/msm/dsi/phy/dsi_phy_10nm.c
@@ -36,43 +36,25 @@
  */
 
 #define VCO_REF_CLK_RATE   19200000
-
-struct dsi_pll_regs {
-   u32 pll_prop_gain_rate;
-   u32 pll_lockdet_rate;
-   u32 decimal_div_start;
-   u32 frac_div_start_low;
-   u32 frac_div_start_mid;
-   u32 frac_div_start_high;
-   u32 pll_clock_inverters;
-   u32 ssc_stepsize_low;
-   u32 ssc_stepsize_high;
-   u32 ssc_div_per_low;
-   u32 ssc_div_per_high;
-   u32 ssc_adjper_low;
-   u32 ssc_adjper_high;
-   u32 ssc_control;
-};
+#define FRAC_BITS 18
 
 /* v3.0.0 10nm implementation that requires the old timings settings */
 #define DSI_PHY_10NM_QUIRK_OLD_TIMINGS BIT(0)
 
 struct dsi_pll_config {
-   u32 ref_freq;
-   bool div_override;
-   u32 output_div;
-   bool ignore_frac;
-   bool disable_prescaler;
bool enable_ssc;
bool ssc_center;
-   u32 dec_bits;
-   u32 frac_bits;
-   u32 lock_timer;
u32 ssc_freq;
u32 ssc_offset;
u32 ssc_adj_per;
-   u32 thresh_cycles;
-   u32 refclk_cycles;
+
+   /* out */
+   u32 pll_prop_gain_rate;
+   u32 decimal_div_start;
+   u32 frac_div_start;
+   u32 pll_clock_inverters;
+   u32 ssc_stepsize;
+   u32 ssc_div_per;
 };
 
 struct pll_10nm_cached_state {
@@ -88,15 +70,11 @@ struct dsi_pll_10nm {
 
struct msm_dsi_phy *phy;
 
-   u64 vco_ref_clk_rate;
u64 vco_current_rate;
 
/* protects REG_DSI_10nm_PHY_CMN_CLK_CFG0 register */
spinlock_t postdiv_lock;
 
-   struct dsi_pll_config pll_configuration;
-   struct dsi_pll_regs reg_setup;
-
struct pll_10nm_cached_state cached_state;
 
struct dsi_pll_10nm *slave;
@@ -110,34 +88,19 @@ struct dsi_pll_10nm {
  */
 static struct dsi_pll_10nm *pll_10nm_list[DSI_MAX];
 
-static void dsi_pll_setup_config(struct dsi_pll_10nm *pll)
+static void dsi_pll_setup_config(struct dsi_pll_config *config)
 {
-   struct dsi_pll_config *config = &pll->pll_configuration;
-
-   config->ref_freq = pll->vco_ref_clk_rate;
-   config->output_div = 1;
-   config->dec_bits = 8;
-   config->frac_bits = 18;
-   config->lock_timer = 64;
config->ssc_freq = 31500;
config->ssc_offset = 5000;
config->ssc_adj_per = 2;
-   config->thresh_cycles = 32;
-   config->refclk_cycles = 256;
-
-   config->div_override = false;
-   config->ignore_frac = false;
-   config->disable_prescaler = false;
 
config->enable_ssc = false;
-   config->ssc_center = 0;
+   config->ssc_center = false;
 }
 
-static void dsi_pll_calc_dec_frac(struct dsi_pll_10nm *pll)
+static void dsi_pll_calc_dec_frac(struct dsi_pll_10nm *pll, struct dsi_pll_config *config)
 {
-   struct dsi_pll_config *config = &pll->pll_configuration;
-   struct dsi_pll_regs *regs = &pll->reg_setup;
-   u64 fref = pll->vco_ref_clk_rate;
+   u64 fref = VCO_REF_CLK_RATE;
u64 pll_freq;
u64 divider;
u64 dec, dec_multiple;
@@ -146,40 +109,32 @@ static void dsi_pll_calc_dec_frac(struct dsi_pll_10nm 
*pll)
 
pll_freq = pll->vco_current_rate;
 
-   if (config->disable_prescaler)
-   divider = fref;
-   else
-   divider = fref * 2;
+   divider = fref * 2;
 
-   multiplier = 1 << config->frac_bits;
+   multiplier = 1 << FRAC_BITS;
dec_multiple = div_u64(pll_freq * multiplier, divider);
dec = div_u64_rem(dec_multiple, multiplier, &frac);
 
	if (pll_freq <= 1900000000UL)
-   regs->pll_prop_gain_rate = 8;
+   config->pll_prop_gain_rate = 8;
	else if (pll_freq <= 3000000000UL)
-   regs->pll_prop_gain_rate = 10;
+   config->pll_prop_gain_rate = 10;
else
-   regs->pll_prop_gain_rate = 12;
+   config->pll_prop_gain_rate = 12;
	if (pll_freq < 1100000000UL)
-   regs->pll_clock_inverters = 8;
+   config->pll_clock_inverters = 8;
else
-   regs->pll_clock_inverters = 0;
+   config->pll_clock_inverters = 0;

[Freedreno] [PATCH v4 17/24] drm/msm/dsi: make save_state/restore_state callbacks accept msm_dsi_phy

2021-03-31 Thread Dmitry Baryshkov
Make the save_state/restore_state callbacks accept struct msm_dsi_phy rather
than struct msm_dsi_pll. This moves them to struct msm_dsi_phy_ops, allowing
us to drop struct msm_dsi_pll_ops.

Signed-off-by: Dmitry Baryshkov 
Tested-by: Stephen Boyd  # on sc7180 lazor
---
 drivers/gpu/drm/msm/dsi/phy/dsi_phy.c | 12 +++
 drivers/gpu/drm/msm/dsi/phy/dsi_phy.h | 11 +++---
 drivers/gpu/drm/msm/dsi/phy/dsi_phy_10nm.c| 24 ++---
 drivers/gpu/drm/msm/dsi/phy/dsi_phy_14nm.c| 24 ++---
 drivers/gpu/drm/msm/dsi/phy/dsi_phy_28nm.c| 34 ---
 .../gpu/drm/msm/dsi/phy/dsi_phy_28nm_8960.c   | 18 +-
 drivers/gpu/drm/msm/dsi/phy/dsi_phy_7nm.c | 24 ++---
 7 files changed, 64 insertions(+), 83 deletions(-)

diff --git a/drivers/gpu/drm/msm/dsi/phy/dsi_phy.c 
b/drivers/gpu/drm/msm/dsi/phy/dsi_phy.c
index a1360e2dad3b..2c5ccead3baa 100644
--- a/drivers/gpu/drm/msm/dsi/phy/dsi_phy.c
+++ b/drivers/gpu/drm/msm/dsi/phy/dsi_phy.c
@@ -858,9 +858,9 @@ int msm_dsi_phy_get_clk_provider(struct msm_dsi_phy *phy,
 
 void msm_dsi_phy_pll_save_state(struct msm_dsi_phy *phy)
 {
-   if (phy->cfg->pll_ops.save_state) {
-   phy->cfg->pll_ops.save_state(phy->pll);
-   phy->pll->state_saved = true;
+   if (phy->cfg->ops.save_pll_state) {
+   phy->cfg->ops.save_pll_state(phy);
+   phy->state_saved = true;
}
 }
 
@@ -868,12 +868,12 @@ int msm_dsi_phy_pll_restore_state(struct msm_dsi_phy *phy)
 {
int ret;
 
-   if (phy->cfg->pll_ops.restore_state && phy->pll->state_saved) {
-   ret = phy->cfg->pll_ops.restore_state(phy->pll);
+   if (phy->cfg->ops.restore_pll_state && phy->state_saved) {
+   ret = phy->cfg->ops.restore_pll_state(phy);
if (ret)
return ret;
 
-   phy->pll->state_saved = false;
+   phy->state_saved = false;
}
 
return 0;
diff --git a/drivers/gpu/drm/msm/dsi/phy/dsi_phy.h 
b/drivers/gpu/drm/msm/dsi/phy/dsi_phy.h
index b477d21804c8..0b51828c3146 100644
--- a/drivers/gpu/drm/msm/dsi/phy/dsi_phy.h
+++ b/drivers/gpu/drm/msm/dsi/phy/dsi_phy.h
@@ -17,7 +17,6 @@
 struct msm_dsi_pll {
struct clk_hw   clk_hw;
boolpll_on;
-   boolstate_saved;
 
const struct msm_dsi_phy_cfg *cfg;
 };
@@ -29,17 +28,13 @@ struct msm_dsi_phy_ops {
int (*enable)(struct msm_dsi_phy *phy, int src_pll_id,
struct msm_dsi_phy_clk_request *clk_req);
void (*disable)(struct msm_dsi_phy *phy);
-};
-
-struct msm_dsi_pll_ops {
-   void (*save_state)(struct msm_dsi_pll *pll);
-   int (*restore_state)(struct msm_dsi_pll *pll);
+   void (*save_pll_state)(struct msm_dsi_phy *phy);
+   int (*restore_pll_state)(struct msm_dsi_phy *phy);
 };
 
 struct msm_dsi_phy_cfg {
struct dsi_reg_config reg_cfg;
struct msm_dsi_phy_ops ops;
-   const struct msm_dsi_pll_ops pll_ops;
 
unsigned long   min_pll_rate;
unsigned long   max_pll_rate;
@@ -115,6 +110,8 @@ struct msm_dsi_phy {
struct msm_dsi_pll *pll;
 
struct clk_hw_onecell_data *provided_clocks;
+
+   bool state_saved;
 };
 
 /*
diff --git a/drivers/gpu/drm/msm/dsi/phy/dsi_phy_10nm.c 
b/drivers/gpu/drm/msm/dsi/phy/dsi_phy_10nm.c
index 91ae0f8dbd88..fefff08f83fd 100644
--- a/drivers/gpu/drm/msm/dsi/phy/dsi_phy_10nm.c
+++ b/drivers/gpu/drm/msm/dsi/phy/dsi_phy_10nm.c
@@ -518,9 +518,9 @@ static const struct clk_ops clk_ops_dsi_pll_10nm_vco = {
  * PLL Callbacks
  */
 
-static void dsi_pll_10nm_save_state(struct msm_dsi_pll *pll)
+static void dsi_10nm_pll_save_state(struct msm_dsi_phy *phy)
 {
-   struct dsi_pll_10nm *pll_10nm = to_pll_10nm(pll);
+   struct dsi_pll_10nm *pll_10nm = to_pll_10nm(phy->pll);
struct pll_10nm_cached_state *cached = &pll_10nm->cached_state;
void __iomem *phy_base = pll_10nm->phy_cmn_mmio;
u32 cmn_clk_cfg0, cmn_clk_cfg1;
@@ -541,9 +541,9 @@ static void dsi_pll_10nm_save_state(struct msm_dsi_pll *pll)
cached->pix_clk_div, cached->pll_mux);
 }
 
-static int dsi_pll_10nm_restore_state(struct msm_dsi_pll *pll)
+static int dsi_10nm_pll_restore_state(struct msm_dsi_phy *phy)
 {
-   struct dsi_pll_10nm *pll_10nm = to_pll_10nm(pll);
+   struct dsi_pll_10nm *pll_10nm = to_pll_10nm(phy->pll);
struct pll_10nm_cached_state *cached = &pll_10nm->cached_state;
void __iomem *phy_base = pll_10nm->phy_cmn_mmio;
u32 val;
@@ -562,7 +562,9 @@ static int dsi_pll_10nm_restore_state(struct msm_dsi_pll 
*pll)
val |= cached->pll_mux;
pll_write(phy_base + REG_DSI_10nm_PHY_CMN_CLK_CFG1, val);
 
-   ret = dsi_pll_10nm_vco_set_rate(&pll->clk_hw, 
pll_10nm->vco_current_rate, pll_10nm->vco_ref_clk_rate);
+   ret = dsi_pll_10nm_vco_set_rate(&phy->pll->clk_hw,
+   pll_10nm->vco_current_rate,
+   pll_10nm->vco_

[Freedreno] [PATCH v4 19/24] drm/msm/dsi: drop PLL accessor functions

2021-03-31 Thread Dmitry Baryshkov
Replace PLL accessor functions (pll_read/pll_write*) with the DSI PHY
accessors, reducing duplication.

Signed-off-by: Dmitry Baryshkov 
Reviewed-by: Abhinav Kumar 
Tested-by: Stephen Boyd  # on sc7180 lazor
---
 drivers/gpu/drm/msm/dsi/phy/dsi_phy.h |  24 +--
 drivers/gpu/drm/msm/dsi/phy/dsi_phy_10nm.c| 124 
 drivers/gpu/drm/msm/dsi/phy/dsi_phy_14nm.c| 126 
 drivers/gpu/drm/msm/dsi/phy/dsi_phy_28nm.c| 118 +++
 .../gpu/drm/msm/dsi/phy/dsi_phy_28nm_8960.c   |  54 +++
 drivers/gpu/drm/msm/dsi/phy/dsi_phy_7nm.c | 140 +-
 6 files changed, 283 insertions(+), 303 deletions(-)

diff --git a/drivers/gpu/drm/msm/dsi/phy/dsi_phy.h 
b/drivers/gpu/drm/msm/dsi/phy/dsi_phy.h
index e80560f38d80..5a72b030376b 100644
--- a/drivers/gpu/drm/msm/dsi/phy/dsi_phy.h
+++ b/drivers/gpu/drm/msm/dsi/phy/dsi_phy.h
@@ -14,6 +14,8 @@
 
 #define dsi_phy_read(offset) msm_readl((offset))
 #define dsi_phy_write(offset, data) msm_writel((data), (offset))
+#define dsi_phy_write_udelay(offset, data, delay_us) { msm_writel((data), (offset)); udelay(delay_us); }
+#define dsi_phy_write_ndelay(offset, data, delay_ns) { msm_writel((data), (offset)); ndelay(delay_ns); }
 
 struct msm_dsi_phy_ops {
int (*pll_init)(struct msm_dsi_phy *phy);
@@ -120,27 +122,5 @@ int msm_dsi_dphy_timing_calc_v4(struct msm_dsi_dphy_timing 
*timing,
struct msm_dsi_phy_clk_request *clk_req);
 void msm_dsi_phy_set_src_pll(struct msm_dsi_phy *phy, int pll_id, u32 reg,
u32 bit_mask);
-/* PLL accessors */
-static inline void pll_write(void __iomem *reg, u32 data)
-{
-   msm_writel(data, reg);
-}
-
-static inline u32 pll_read(const void __iomem *reg)
-{
-   return msm_readl(reg);
-}
-
-static inline void pll_write_udelay(void __iomem *reg, u32 data, u32 delay_us)
-{
-   pll_write(reg, data);
-   udelay(delay_us);
-}
-
-static inline void pll_write_ndelay(void __iomem *reg, u32 data, u32 delay_ns)
-{
-   pll_write((reg), data);
-   ndelay(delay_ns);
-}
 
 #endif /* __DSI_PHY_H__ */
diff --git a/drivers/gpu/drm/msm/dsi/phy/dsi_phy_10nm.c 
b/drivers/gpu/drm/msm/dsi/phy/dsi_phy_10nm.c
index cbf3d64d5efb..2b188c6a0a7a 100644
--- a/drivers/gpu/drm/msm/dsi/phy/dsi_phy_10nm.c
+++ b/drivers/gpu/drm/msm/dsi/phy/dsi_phy_10nm.c
@@ -234,19 +234,19 @@ static void dsi_pll_ssc_commit(struct dsi_pll_10nm *pll)
if (pll->pll_configuration.enable_ssc) {
pr_debug("SSC is enabled\n");
 
-   pll_write(base + REG_DSI_10nm_PHY_PLL_SSC_STEPSIZE_LOW_1,
+   dsi_phy_write(base + REG_DSI_10nm_PHY_PLL_SSC_STEPSIZE_LOW_1,
  regs->ssc_stepsize_low);
-   pll_write(base + REG_DSI_10nm_PHY_PLL_SSC_STEPSIZE_HIGH_1,
+   dsi_phy_write(base + REG_DSI_10nm_PHY_PLL_SSC_STEPSIZE_HIGH_1,
  regs->ssc_stepsize_high);
-   pll_write(base + REG_DSI_10nm_PHY_PLL_SSC_DIV_PER_LOW_1,
+   dsi_phy_write(base + REG_DSI_10nm_PHY_PLL_SSC_DIV_PER_LOW_1,
  regs->ssc_div_per_low);
-   pll_write(base + REG_DSI_10nm_PHY_PLL_SSC_DIV_PER_HIGH_1,
+   dsi_phy_write(base + REG_DSI_10nm_PHY_PLL_SSC_DIV_PER_HIGH_1,
  regs->ssc_div_per_high);
-   pll_write(base + REG_DSI_10nm_PHY_PLL_SSC_DIV_ADJPER_LOW_1,
+   dsi_phy_write(base + REG_DSI_10nm_PHY_PLL_SSC_DIV_ADJPER_LOW_1,
  regs->ssc_adjper_low);
-   pll_write(base + REG_DSI_10nm_PHY_PLL_SSC_DIV_ADJPER_HIGH_1,
+   dsi_phy_write(base + REG_DSI_10nm_PHY_PLL_SSC_DIV_ADJPER_HIGH_1,
  regs->ssc_adjper_high);
-   pll_write(base + REG_DSI_10nm_PHY_PLL_SSC_CONTROL,
+   dsi_phy_write(base + REG_DSI_10nm_PHY_PLL_SSC_CONTROL,
  SSC_EN | regs->ssc_control);
}
 }
@@ -255,26 +255,26 @@ static void dsi_pll_config_hzindep_reg(struct 
dsi_pll_10nm *pll)
 {
void __iomem *base = pll->mmio;
 
-   pll_write(base + REG_DSI_10nm_PHY_PLL_ANALOG_CONTROLS_ONE, 0x80);
-   pll_write(base + REG_DSI_10nm_PHY_PLL_ANALOG_CONTROLS_TWO, 0x03);
-   pll_write(base + REG_DSI_10nm_PHY_PLL_ANALOG_CONTROLS_THREE, 0x00);
-   pll_write(base + REG_DSI_10nm_PHY_PLL_DSM_DIVIDER, 0x00);
-   pll_write(base + REG_DSI_10nm_PHY_PLL_FEEDBACK_DIVIDER, 0x4e);
-   pll_write(base + REG_DSI_10nm_PHY_PLL_CALIBRATION_SETTINGS, 0x40);
-   pll_write(base + REG_DSI_10nm_PHY_PLL_BAND_SEL_CAL_SETTINGS_THREE,
+   dsi_phy_write(base + REG_DSI_10nm_PHY_PLL_ANALOG_CONTROLS_ONE, 0x80);
+   dsi_phy_write(base + REG_DSI_10nm_PHY_PLL_ANALOG_CONTROLS_TWO, 0x03);
+   dsi_phy_write(base + REG_DSI_10nm_PHY_PLL_ANALOG_CONTROLS_THREE, 0x00);
+   dsi_phy_write(base + REG_DSI_10nm_PHY_PLL_DSM_DIVIDER, 0x00);
+   dsi_phy_write(base + REG_DSI_10nm_PHY_PLL_FEEDBACK_DIVIDER, 0x4e

[Freedreno] [PATCH v4 23/24] drm/msm/dsi: inline msm_dsi_phy_set_src_pll

2021-03-31 Thread Dmitry Baryshkov
The src_pll_truthtable config is not used by some of the PHYs, which use
other means of configuring the master/slave usecases. Inline this function
with the goal of removing the src_pll_id argument in the next commit.
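
To illustrate what the truth table encoded, here is a rough equivalent of
the 14nm case (sketch only; the function name is made up, the register and
bit names are taken from the 14nm hunk below):

static void dsi_14nm_select_bitclk_source_sketch(struct msm_dsi_phy *phy,
						 int src_pll_id)
{
	u32 val = dsi_phy_read(phy->base + REG_DSI_14nm_PHY_CMN_GLBL_TEST_CTRL);

	/* Only PHY 1 fed from PLL 0 selects the external (HS) bit clock. */
	if (phy->id == DSI_1 && src_pll_id == DSI_0)
		val |= DSI_14nm_PHY_CMN_GLBL_TEST_CTRL_BITCLK_HS_SEL;
	else
		val &= ~DSI_14nm_PHY_CMN_GLBL_TEST_CTRL_BITCLK_HS_SEL;

	dsi_phy_write(phy->base + REG_DSI_14nm_PHY_CMN_GLBL_TEST_CTRL, val);
}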

Signed-off-by: Dmitry Baryshkov 
Tested-by: Stephen Boyd  # on sc7180 lazor
---
 drivers/gpu/drm/msm/dsi/phy/dsi_phy.c   | 17 -
 drivers/gpu/drm/msm/dsi/phy/dsi_phy.h   |  8 
 drivers/gpu/drm/msm/dsi/phy/dsi_phy_10nm.c  |  2 --
 drivers/gpu/drm/msm/dsi/phy/dsi_phy_14nm.c  | 13 +++--
 drivers/gpu/drm/msm/dsi/phy/dsi_phy_20nm.c  | 11 +++
 drivers/gpu/drm/msm/dsi/phy/dsi_phy_28nm.c  | 13 +++--
 drivers/gpu/drm/msm/dsi/phy/dsi_phy_28nm_8960.c |  1 -
 drivers/gpu/drm/msm/dsi/phy/dsi_phy_7nm.c   |  2 --
 8 files changed, 21 insertions(+), 46 deletions(-)

diff --git a/drivers/gpu/drm/msm/dsi/phy/dsi_phy.c 
b/drivers/gpu/drm/msm/dsi/phy/dsi_phy.c
index 344887025720..93e81bb78d26 100644
--- a/drivers/gpu/drm/msm/dsi/phy/dsi_phy.c
+++ b/drivers/gpu/drm/msm/dsi/phy/dsi_phy.c
@@ -461,23 +461,6 @@ int msm_dsi_dphy_timing_calc_v4(struct msm_dsi_dphy_timing 
*timing,
return 0;
 }
 
-void msm_dsi_phy_set_src_pll(struct msm_dsi_phy *phy, int pll_id, u32 reg,
-   u32 bit_mask)
-{
-   int phy_id = phy->id;
-   u32 val;
-
-   if ((phy_id >= DSI_MAX) || (pll_id >= DSI_MAX))
-   return;
-
-   val = dsi_phy_read(phy->base + reg);
-
-   if (phy->cfg->src_pll_truthtable[phy_id][pll_id])
-   dsi_phy_write(phy->base + reg, val | bit_mask);
-   else
-   dsi_phy_write(phy->base + reg, val & (~bit_mask));
-}
-
 static int dsi_phy_regulator_init(struct msm_dsi_phy *phy)
 {
struct regulator_bulk_data *s = phy->supplies;
diff --git a/drivers/gpu/drm/msm/dsi/phy/dsi_phy.h 
b/drivers/gpu/drm/msm/dsi/phy/dsi_phy.h
index 7748f8b5ea53..00ef01baaebd 100644
--- a/drivers/gpu/drm/msm/dsi/phy/dsi_phy.h
+++ b/drivers/gpu/drm/msm/dsi/phy/dsi_phy.h
@@ -33,12 +33,6 @@ struct msm_dsi_phy_cfg {
unsigned long   min_pll_rate;
unsigned long   max_pll_rate;
 
-   /*
-* Each cell {phy_id, pll_id} of the truth table indicates
-* if the source PLL selection bit should be set for each PHY.
-* Fill default H/W values in illegal cells, eg. cell {0, 1}.
-*/
-   bool src_pll_truthtable[DSI_MAX][DSI_MAX];
const resource_size_t io_start[DSI_MAX];
const int num_dsi_phy;
const int quirks;
@@ -121,7 +115,5 @@ int msm_dsi_dphy_timing_calc_v3(struct msm_dsi_dphy_timing 
*timing,
struct msm_dsi_phy_clk_request *clk_req);
 int msm_dsi_dphy_timing_calc_v4(struct msm_dsi_dphy_timing *timing,
struct msm_dsi_phy_clk_request *clk_req);
-void msm_dsi_phy_set_src_pll(struct msm_dsi_phy *phy, int pll_id, u32 reg,
-   u32 bit_mask);
 
 #endif /* __DSI_PHY_H__ */
diff --git a/drivers/gpu/drm/msm/dsi/phy/dsi_phy_10nm.c 
b/drivers/gpu/drm/msm/dsi/phy/dsi_phy_10nm.c
index 655996cf8688..64b8b0efc1a4 100644
--- a/drivers/gpu/drm/msm/dsi/phy/dsi_phy_10nm.c
+++ b/drivers/gpu/drm/msm/dsi/phy/dsi_phy_10nm.c
@@ -921,7 +921,6 @@ static void dsi_10nm_phy_disable(struct msm_dsi_phy *phy)
 }
 
 const struct msm_dsi_phy_cfg dsi_phy_10nm_cfgs = {
-   .src_pll_truthtable = { {false, false}, {true, false} },
.has_phy_lane = true,
.reg_cfg = {
.num = 1,
@@ -943,7 +942,6 @@ const struct msm_dsi_phy_cfg dsi_phy_10nm_cfgs = {
 };
 
 const struct msm_dsi_phy_cfg dsi_phy_10nm_8998_cfgs = {
-   .src_pll_truthtable = { {false, false}, {true, false} },
.has_phy_lane = true,
.reg_cfg = {
.num = 1,
diff --git a/drivers/gpu/drm/msm/dsi/phy/dsi_phy_14nm.c 
b/drivers/gpu/drm/msm/dsi/phy/dsi_phy_14nm.c
index 090d3e7a2212..9a2937589435 100644
--- a/drivers/gpu/drm/msm/dsi/phy/dsi_phy_14nm.c
+++ b/drivers/gpu/drm/msm/dsi/phy/dsi_phy_14nm.c
@@ -947,6 +947,7 @@ static int dsi_14nm_phy_enable(struct msm_dsi_phy *phy, int 
src_pll_id,
int ret;
void __iomem *base = phy->base;
void __iomem *lane_base = phy->lane_base;
+   u32 glbl_test_ctrl;
 
if (msm_dsi_dphy_timing_calc_v2(timing, clk_req)) {
DRM_DEV_ERROR(&phy->pdev->dev,
@@ -994,10 +995,12 @@ static int dsi_14nm_phy_enable(struct msm_dsi_phy *phy, 
int src_pll_id,
udelay(100);
dsi_phy_write(base + REG_DSI_14nm_PHY_CMN_CTRL_1, 0x00);
 
-   msm_dsi_phy_set_src_pll(phy, src_pll_id,
-   REG_DSI_14nm_PHY_CMN_GLBL_TEST_CTRL,
-   DSI_14nm_PHY_CMN_GLBL_TEST_CTRL_BITCLK_HS_SEL);
-
+   glbl_test_ctrl = dsi_phy_read(base + REG_DSI_14nm_PHY_CMN_GLBL_TEST_CTRL);
+   if (phy->id == DSI_1 && src_pll_id == DSI_0)
+   glbl_test_ctrl |= DSI_14nm_PHY_CMN_GLBL_TEST_CTRL_BITCLK_HS_SEL;
+   else
+   g

[Freedreno] [PATCH v4 20/24] drm/msm/dsi: move ioremaps to dsi_phy_driver_probe

2021-03-31 Thread Dmitry Baryshkov
All PHY drivers map the dsi_pll area. Some PHY drivers also map the
dsi_phy area again (a leftover from the old PHY/PLL separation). Move
all ioremaps to the common dsi_phy driver code and drop the individually
ioremapped areas from the PHY drivers.

Signed-off-by: Dmitry Baryshkov 
Reviewed-by: Abhinav Kumar 
Tested-by: Stephen Boyd  # on sc7180 lazor
---
 drivers/gpu/drm/msm/dsi/phy/dsi_phy.c |  7 ++
 drivers/gpu/drm/msm/dsi/phy/dsi_phy.h |  1 +
 drivers/gpu/drm/msm/dsi/phy/dsi_phy_10nm.c| 75 +++---
 drivers/gpu/drm/msm/dsi/phy/dsi_phy_14nm.c| 49 
 drivers/gpu/drm/msm/dsi/phy/dsi_phy_28nm.c| 33 +++-
 .../gpu/drm/msm/dsi/phy/dsi_phy_28nm_8960.c   | 27 +++
 drivers/gpu/drm/msm/dsi/phy/dsi_phy_7nm.c | 79 ---
 7 files changed, 108 insertions(+), 163 deletions(-)

diff --git a/drivers/gpu/drm/msm/dsi/phy/dsi_phy.c 
b/drivers/gpu/drm/msm/dsi/phy/dsi_phy.c
index 2c5ccead3baa..344887025720 100644
--- a/drivers/gpu/drm/msm/dsi/phy/dsi_phy.c
+++ b/drivers/gpu/drm/msm/dsi/phy/dsi_phy.c
@@ -682,6 +682,13 @@ static int dsi_phy_driver_probe(struct platform_device 
*pdev)
goto fail;
}
 
+   phy->pll_base = msm_ioremap(pdev, "dsi_pll", "DSI_PLL");
+   if (IS_ERR(phy->pll_base)) {
+   DRM_DEV_ERROR(&pdev->dev, "%s: failed to map pll base\n", 
__func__);
+   ret = -ENOMEM;
+   goto fail;
+   }
+
if (phy->cfg->has_phy_lane) {
		phy->lane_base = msm_ioremap(pdev, "dsi_phy_lane", "DSI_PHY_LANE");
if (IS_ERR(phy->lane_base)) {
diff --git a/drivers/gpu/drm/msm/dsi/phy/dsi_phy.h 
b/drivers/gpu/drm/msm/dsi/phy/dsi_phy.h
index 5a72b030376b..7748f8b5ea53 100644
--- a/drivers/gpu/drm/msm/dsi/phy/dsi_phy.h
+++ b/drivers/gpu/drm/msm/dsi/phy/dsi_phy.h
@@ -88,6 +88,7 @@ struct msm_dsi_dphy_timing {
 struct msm_dsi_phy {
struct platform_device *pdev;
void __iomem *base;
+   void __iomem *pll_base;
void __iomem *reg_base;
void __iomem *lane_base;
int id;
diff --git a/drivers/gpu/drm/msm/dsi/phy/dsi_phy_10nm.c 
b/drivers/gpu/drm/msm/dsi/phy/dsi_phy_10nm.c
index 2b188c6a0a7a..ef92c3f38a9a 100644
--- a/drivers/gpu/drm/msm/dsi/phy/dsi_phy_10nm.c
+++ b/drivers/gpu/drm/msm/dsi/phy/dsi_phy_10nm.c
@@ -91,9 +91,6 @@ struct dsi_pll_10nm {
 
struct msm_dsi_phy *phy;
 
-   void __iomem *phy_cmn_mmio;
-   void __iomem *mmio;
-
u64 vco_ref_clk_rate;
u64 vco_current_rate;
 
@@ -228,7 +225,7 @@ static void dsi_pll_calc_ssc(struct dsi_pll_10nm *pll)
 
 static void dsi_pll_ssc_commit(struct dsi_pll_10nm *pll)
 {
-   void __iomem *base = pll->mmio;
+   void __iomem *base = pll->phy->pll_base;
struct dsi_pll_regs *regs = &pll->reg_setup;
 
if (pll->pll_configuration.enable_ssc) {
@@ -253,7 +250,7 @@ static void dsi_pll_ssc_commit(struct dsi_pll_10nm *pll)
 
 static void dsi_pll_config_hzindep_reg(struct dsi_pll_10nm *pll)
 {
-   void __iomem *base = pll->mmio;
+   void __iomem *base = pll->phy->pll_base;
 
dsi_phy_write(base + REG_DSI_10nm_PHY_PLL_ANALOG_CONTROLS_ONE, 0x80);
dsi_phy_write(base + REG_DSI_10nm_PHY_PLL_ANALOG_CONTROLS_TWO, 0x03);
@@ -279,7 +276,7 @@ static void dsi_pll_config_hzindep_reg(struct dsi_pll_10nm 
*pll)
 
 static void dsi_pll_commit(struct dsi_pll_10nm *pll)
 {
-   void __iomem *base = pll->mmio;
+   void __iomem *base = pll->phy->pll_base;
struct dsi_pll_regs *reg = &pll->reg_setup;
 
dsi_phy_write(base + REG_DSI_10nm_PHY_PLL_CORE_INPUT_OVERRIDE, 0x12);
@@ -336,7 +333,7 @@ static int dsi_pll_10nm_lock_status(struct dsi_pll_10nm 
*pll)
u32 const delay_us = 100;
u32 const timeout_us = 5000;
 
-   rc = readl_poll_timeout_atomic(pll->mmio +
+   rc = readl_poll_timeout_atomic(pll->phy->pll_base +
   REG_DSI_10nm_PHY_PLL_COMMON_STATUS_ONE,
   status,
   ((status & BIT(0)) > 0),
@@ -351,21 +348,21 @@ static int dsi_pll_10nm_lock_status(struct dsi_pll_10nm 
*pll)
 
 static void dsi_pll_disable_pll_bias(struct dsi_pll_10nm *pll)
 {
-   u32 data = dsi_phy_read(pll->phy_cmn_mmio + REG_DSI_10nm_PHY_CMN_CTRL_0);
+   u32 data = dsi_phy_read(pll->phy->base + REG_DSI_10nm_PHY_CMN_CTRL_0);
 
-   dsi_phy_write(pll->mmio + REG_DSI_10nm_PHY_PLL_SYSTEM_MUXES, 0);
-   dsi_phy_write(pll->phy_cmn_mmio + REG_DSI_10nm_PHY_CMN_CTRL_0,
+   dsi_phy_write(pll->phy->pll_base + REG_DSI_10nm_PHY_PLL_SYSTEM_MUXES, 0);
+   dsi_phy_write(pll->phy->base + REG_DSI_10nm_PHY_CMN_CTRL_0,
  data & ~BIT(5));
ndelay(250);
 }
 
 static void dsi_pll_enable_pll_bias(struct dsi_pll_10nm *pll)
 {
-   u32 data = dsi_phy_read(pll->phy_cmn_mmio + REG_DSI_10nm_PHY_CMN_CTRL_0);
+   u32 data = dsi_phy_read(pll->phy->base + REG_DSI_10nm_PHY_CMN_CTRL_0);
 
-   dsi_phy_write

[Freedreno] [PATCH v4 15/24] drm/msm/dsi: simplify vco_delay handling in dsi_phy_28nm driver

2021-03-31 Thread Dmitry Baryshkov
Instead of setting the variable and then using it in just one place,
determine vco_delay directly at PLL configuration time.

Signed-off-by: Dmitry Baryshkov 
Reviewed-by: Abhinav Kumar 
Tested-by: Stephen Boyd  # on sc7180 lazor
---
 drivers/gpu/drm/msm/dsi/phy/dsi_phy_28nm.c | 12 
 1 file changed, 4 insertions(+), 8 deletions(-)

diff --git a/drivers/gpu/drm/msm/dsi/phy/dsi_phy_28nm.c 
b/drivers/gpu/drm/msm/dsi/phy/dsi_phy_28nm.c
index 3e9b7949b038..ed369eb18e9d 100644
--- a/drivers/gpu/drm/msm/dsi/phy/dsi_phy_28nm.c
+++ b/drivers/gpu/drm/msm/dsi/phy/dsi_phy_28nm.c
@@ -72,8 +72,6 @@ struct dsi_pll_28nm {
struct platform_device *pdev;
void __iomem *mmio;
 
-   int vco_delay;
-
struct pll_28nm_cached_state cached_state;
 };
 
@@ -212,8 +210,10 @@ static int dsi_pll_28nm_clk_set_rate(struct clk_hw *hw, 
unsigned long rate,
pll_write(base + REG_DSI_28nm_PHY_PLL_SDM_CFG4, 0x00);
 
/* Add hardware recommended delay for correct PLL configuration */
-   if (pll_28nm->vco_delay)
-   udelay(pll_28nm->vco_delay);
+   if (pll->cfg->quirks & DSI_PHY_28NM_QUIRK_PHY_LP)
+   udelay(1000);
+   else
+   udelay(1);
 
pll_write(base + REG_DSI_28nm_PHY_PLL_REFCLK_CFG, refclk_cfg);
pll_write(base + REG_DSI_28nm_PHY_PLL_PWRGEN_CFG, 0x00);
@@ -580,10 +580,6 @@ static int dsi_pll_28nm_init(struct msm_dsi_phy *phy)
 
pll = &pll_28nm->base;
pll->cfg = phy->cfg;
-   if (phy->cfg->quirks & DSI_PHY_28NM_QUIRK_PHY_LP)
-   pll_28nm->vco_delay = 1000;
-   else
-   pll_28nm->vco_delay = 1;
 
ret = pll_28nm_register(pll_28nm, phy->provided_clocks->hws);
if (ret) {
-- 
2.30.2



[Freedreno] [PATCH v4 06/24] drm/msm/dsi: move all PLL callbacks into PHY config struct

2021-03-31 Thread Dmitry Baryshkov
Move all PLL-related callbacks into struct msm_dsi_phy_cfg. This limits
the amount of data in the struct msm_dsi_pll.

Signed-off-by: Dmitry Baryshkov 
Reviewed-by: Abhinav Kumar 
Tested-by: Stephen Boyd  # on sc7180 lazor
---
 drivers/gpu/drm/msm/dsi/dsi.h |  6 --
 drivers/gpu/drm/msm/dsi/phy/dsi_phy.c | 14 ++--
 drivers/gpu/drm/msm/dsi/phy/dsi_phy.h | 15 +
 drivers/gpu/drm/msm/dsi/phy/dsi_phy_10nm.c| 38 ---
 drivers/gpu/drm/msm/dsi/phy/dsi_phy_14nm.c| 47 -
 drivers/gpu/drm/msm/dsi/phy/dsi_phy_28nm.c| 65 --
 .../gpu/drm/msm/dsi/phy/dsi_phy_28nm_8960.c   | 33 +
 drivers/gpu/drm/msm/dsi/phy/dsi_phy_7nm.c | 49 +-
 drivers/gpu/drm/msm/dsi/phy/dsi_pll.c | 67 ---
 drivers/gpu/drm/msm/dsi/phy/dsi_pll.h | 67 +--
 10 files changed, 191 insertions(+), 210 deletions(-)

diff --git a/drivers/gpu/drm/msm/dsi/dsi.h b/drivers/gpu/drm/msm/dsi/dsi.h
index 78ef5d4ed922..21cf883fb6f1 100644
--- a/drivers/gpu/drm/msm/dsi/dsi.h
+++ b/drivers/gpu/drm/msm/dsi/dsi.h
@@ -107,8 +107,6 @@ struct drm_encoder *msm_dsi_get_encoder(struct msm_dsi 
*msm_dsi);
 /* dsi pll */
 struct msm_dsi_pll;
 #ifdef CONFIG_DRM_MSM_DSI_PLL
-struct msm_dsi_pll *msm_dsi_pll_init(struct platform_device *pdev,
-   enum msm_dsi_phy_type type, int dsi_id);
 void msm_dsi_pll_destroy(struct msm_dsi_pll *pll);
 int msm_dsi_pll_get_clk_provider(struct msm_dsi_pll *pll,
struct clk **byte_clk_provider, struct clk **pixel_clk_provider);
@@ -117,10 +115,6 @@ int msm_dsi_pll_restore_state(struct msm_dsi_pll *pll);
 int msm_dsi_pll_set_usecase(struct msm_dsi_pll *pll,
enum msm_dsi_phy_usecase uc);
 #else
-static inline struct msm_dsi_pll *msm_dsi_pll_init(struct platform_device 
*pdev,
-enum msm_dsi_phy_type type, int id) {
-   return ERR_PTR(-ENODEV);
-}
 static inline void msm_dsi_pll_destroy(struct msm_dsi_pll *pll)
 {
 }
diff --git a/drivers/gpu/drm/msm/dsi/phy/dsi_phy.c 
b/drivers/gpu/drm/msm/dsi/phy/dsi_phy.c
index 83eb0a630443..5f153b683521 100644
--- a/drivers/gpu/drm/msm/dsi/phy/dsi_phy.c
+++ b/drivers/gpu/drm/msm/dsi/phy/dsi_phy.c
@@ -709,12 +709,14 @@ static int dsi_phy_driver_probe(struct platform_device 
*pdev)
if (ret)
goto fail;
 
-   phy->pll = msm_dsi_pll_init(pdev, phy->cfg->type, phy->id);
-   if (IS_ERR_OR_NULL(phy->pll)) {
-   DRM_DEV_INFO(dev,
-   "%s: pll init failed: %ld, need separate pll clk 
driver\n",
-   __func__, PTR_ERR(phy->pll));
-   phy->pll = NULL;
+   if (phy->cfg->ops.pll_init) {
+   ret = phy->cfg->ops.pll_init(phy);
+   if (ret) {
+   DRM_DEV_INFO(dev,
+   "%s: pll init failed: %d, need separate pll clk 
driver\n",
+   __func__, ret);
+   goto fail;
+   }
}
 
dsi_phy_disable_resource(phy);
diff --git a/drivers/gpu/drm/msm/dsi/phy/dsi_phy.h 
b/drivers/gpu/drm/msm/dsi/phy/dsi_phy.h
index 03dfb08e7128..244d2c900d40 100644
--- a/drivers/gpu/drm/msm/dsi/phy/dsi_phy.h
+++ b/drivers/gpu/drm/msm/dsi/phy/dsi_phy.h
@@ -17,15 +17,30 @@
 #define V3_0_0_10NM_OLD_TIMINGS_QUIRK  BIT(0)
 
 struct msm_dsi_phy_ops {
+   int (*pll_init)(struct msm_dsi_phy *phy);
int (*enable)(struct msm_dsi_phy *phy, int src_pll_id,
struct msm_dsi_phy_clk_request *clk_req);
void (*disable)(struct msm_dsi_phy *phy);
 };
 
+struct msm_dsi_pll_ops {
+   int (*enable_seq)(struct msm_dsi_pll *pll);
+   void (*disable_seq)(struct msm_dsi_pll *pll);
+   int (*get_provider)(struct msm_dsi_pll *pll,
+   struct clk **byte_clk_provider,
+   struct clk **pixel_clk_provider);
+   void (*destroy)(struct msm_dsi_pll *pll);
+   void (*save_state)(struct msm_dsi_pll *pll);
+   int (*restore_state)(struct msm_dsi_pll *pll);
+   int (*set_usecase)(struct msm_dsi_pll *pll,
+  enum msm_dsi_phy_usecase uc);
+};
+
 struct msm_dsi_phy_cfg {
enum msm_dsi_phy_type type;
struct dsi_reg_config reg_cfg;
struct msm_dsi_phy_ops ops;
+   const struct msm_dsi_pll_ops pll_ops;
 
/*
 * Each cell {phy_id, pll_id} of the truth table indicates
diff --git a/drivers/gpu/drm/msm/dsi/phy/dsi_phy_10nm.c 
b/drivers/gpu/drm/msm/dsi/phy/dsi_phy_10nm.c
index 5da369b5c475..f697ff9a0d8e 100644
--- a/drivers/gpu/drm/msm/dsi/phy/dsi_phy_10nm.c
+++ b/drivers/gpu/drm/msm/dsi/phy/dsi_phy_10nm.c
@@ -828,15 +828,17 @@ static int pll_10nm_register(struct dsi_pll_10nm 
*pll_10nm)
return ret;
 }
 
-struct msm_dsi_pll *msm_dsi_pll_10nm_init(struct platform_device *pdev, int id)
+static int dsi_pll_10nm_init(struct msm_dsi_phy *phy)
 {
+   struct platform_d

[Freedreno] [PATCH v4 12/24] drm/msm/dsi: use devm_of_clk_add_hw_provider

2021-03-31 Thread Dmitry Baryshkov
Use devm_of_clk_add_hw_provider() to register provided clocks. This
allows dropping the remove function altogether.

Signed-off-by: Dmitry Baryshkov 
Reviewed-by: Abhinav Kumar 
Tested-by: Stephen Boyd  # on sc7180 lazor
---
 drivers/gpu/drm/msm/dsi/phy/dsi_phy.c | 22 +-
 1 file changed, 1 insertion(+), 21 deletions(-)

diff --git a/drivers/gpu/drm/msm/dsi/phy/dsi_phy.c 
b/drivers/gpu/drm/msm/dsi/phy/dsi_phy.c
index df3b91b0ea88..46561435a27d 100644
--- a/drivers/gpu/drm/msm/dsi/phy/dsi_phy.c
+++ b/drivers/gpu/drm/msm/dsi/phy/dsi_phy.c
@@ -728,7 +728,7 @@ static int dsi_phy_driver_probe(struct platform_device 
*pdev)
}
}
 
-   ret = of_clk_add_hw_provider(dev->of_node, of_clk_hw_onecell_get,
+   ret = devm_of_clk_add_hw_provider(dev, of_clk_hw_onecell_get,
 phy->provided_clocks);
if (ret) {
DRM_DEV_ERROR(dev, "%s: failed to register clk provider: %d\n", 
__func__, ret);
@@ -742,31 +742,11 @@ static int dsi_phy_driver_probe(struct platform_device 
*pdev)
return 0;
 
 fail:
-   if (phy->pll) {
-   of_clk_del_provider(dev->of_node);
-   phy->pll = NULL;
-   }
-
return ret;
 }
 
-static int dsi_phy_driver_remove(struct platform_device *pdev)
-{
-   struct msm_dsi_phy *phy = platform_get_drvdata(pdev);
-
-   if (phy && phy->pll) {
-   of_clk_del_provider(pdev->dev.of_node);
-   phy->pll = NULL;
-   }
-
-   platform_set_drvdata(pdev, NULL);
-
-   return 0;
-}
-
 static struct platform_driver dsi_phy_platform_driver = {
.probe  = dsi_phy_driver_probe,
-   .remove = dsi_phy_driver_remove,
.driver = {
.name   = "msm_dsi_phy",
.of_match_table = dsi_phy_dt_match,
-- 
2.30.2
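
As a usage illustration only (a minimal sketch, not this driver's exact code),
a hypothetical PHY driver using the devres-managed provider could allocate a
clk_hw_onecell_data for its two provided clocks and never tear the provider
down explicitly. All names below are invented for the example.

#include <linux/clk-provider.h>
#include <linux/device.h>
#include <linux/slab.h>

#define EXAMPLE_NUM_PROVIDED_CLKS	2	/* byte + pixel */

static int example_register_clk_provider(struct device *dev,
					  struct clk_hw *byte_hw,
					  struct clk_hw *pixel_hw)
{
	struct clk_hw_onecell_data *data;

	data = devm_kzalloc(dev, struct_size(data, hws,
					     EXAMPLE_NUM_PROVIDED_CLKS),
			    GFP_KERNEL);
	if (!data)
		return -ENOMEM;

	data->num = EXAMPLE_NUM_PROVIDED_CLKS;
	data->hws[0] = byte_hw;		/* cell 0: byte clock */
	data->hws[1] = pixel_hw;	/* cell 1: pixel clock */

	/* devres-managed: no of_clk_del_provider() needed on remove */
	return devm_of_clk_add_hw_provider(dev, of_clk_hw_onecell_get, data);
}

Consumers can then pick the byte or pixel clock by index through the usual
#clock-cells = <1> binding.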



[Freedreno] [PATCH v4 13/24] drm/msm/dsi: make save/restore_state phy-level functions

2021-03-31 Thread Dmitry Baryshkov
Morph msm_dsi_pll_save/restore_state() into msm_dsi_phy_pll_save/restore_state(),
thus removing the last bits of knowledge about msm_dsi_pll from dsi_manager.

Signed-off-by: Dmitry Baryshkov 
Reviewed-by: Abhinav Kumar 
Tested-by: Stephen Boyd  # on sc7180 lazor
---
 drivers/gpu/drm/msm/dsi/dsi.h  | 18 ++-
 drivers/gpu/drm/msm/dsi/dsi_manager.c  |  6 ++--
 drivers/gpu/drm/msm/dsi/phy/dsi_phy.c  | 35 +++---
 drivers/gpu/drm/msm/dsi/phy/dsi_phy.h  | 11 +++
 drivers/gpu/drm/msm/dsi/phy/dsi_phy_10nm.c |  2 +-
 drivers/gpu/drm/msm/dsi/phy/dsi_phy_7nm.c  |  2 +-
 drivers/gpu/drm/msm/dsi/phy/dsi_pll.c  | 26 
 drivers/gpu/drm/msm/dsi/phy/dsi_pll.h  | 11 ---
 8 files changed, 42 insertions(+), 69 deletions(-)

diff --git a/drivers/gpu/drm/msm/dsi/dsi.h b/drivers/gpu/drm/msm/dsi/dsi.h
index 0970f05cd47f..7f99e12efd52 100644
--- a/drivers/gpu/drm/msm/dsi/dsi.h
+++ b/drivers/gpu/drm/msm/dsi/dsi.h
@@ -92,21 +92,6 @@ static inline bool msm_dsi_device_connected(struct msm_dsi 
*msm_dsi)
 
 struct drm_encoder *msm_dsi_get_encoder(struct msm_dsi *msm_dsi);
 
-/* dsi pll */
-struct msm_dsi_pll;
-#ifdef CONFIG_DRM_MSM_DSI_PLL
-void msm_dsi_pll_save_state(struct msm_dsi_pll *pll);
-int msm_dsi_pll_restore_state(struct msm_dsi_pll *pll);
-#else
-static inline void msm_dsi_pll_save_state(struct msm_dsi_pll *pll)
-{
-}
-static inline int msm_dsi_pll_restore_state(struct msm_dsi_pll *pll)
-{
-   return 0;
-}
-#endif
-
 /* dsi host */
 struct msm_dsi_host;
 int msm_dsi_host_xfer_prepare(struct mipi_dsi_host *host,
@@ -182,11 +167,12 @@ int msm_dsi_phy_enable(struct msm_dsi_phy *phy, int 
src_pll_id,
 void msm_dsi_phy_disable(struct msm_dsi_phy *phy);
 void msm_dsi_phy_get_shared_timings(struct msm_dsi_phy *phy,
struct msm_dsi_phy_shared_timings *shared_timing);
-struct msm_dsi_pll *msm_dsi_phy_get_pll(struct msm_dsi_phy *phy);
 void msm_dsi_phy_set_usecase(struct msm_dsi_phy *phy,
 enum msm_dsi_phy_usecase uc);
 int msm_dsi_phy_get_clk_provider(struct msm_dsi_phy *phy,
struct clk **byte_clk_provider, struct clk **pixel_clk_provider);
+void msm_dsi_phy_pll_save_state(struct msm_dsi_phy *phy);
+int msm_dsi_phy_pll_restore_state(struct msm_dsi_phy *phy);
 
 #endif /* __DSI_CONNECTOR_H__ */
 
diff --git a/drivers/gpu/drm/msm/dsi/dsi_manager.c 
b/drivers/gpu/drm/msm/dsi/dsi_manager.c
index 86e36be58701..e116e5ff5d24 100644
--- a/drivers/gpu/drm/msm/dsi/dsi_manager.c
+++ b/drivers/gpu/drm/msm/dsi/dsi_manager.c
@@ -498,7 +498,6 @@ static void dsi_mgr_bridge_post_disable(struct drm_bridge 
*bridge)
struct msm_dsi *msm_dsi1 = dsi_mgr_get_dsi(DSI_1);
struct mipi_dsi_host *host = msm_dsi->host;
struct drm_panel *panel = msm_dsi->panel;
-   struct msm_dsi_pll *src_pll;
bool is_dual_dsi = IS_DUAL_DSI();
int ret;
 
@@ -532,9 +531,8 @@ static void dsi_mgr_bridge_post_disable(struct drm_bridge 
*bridge)
id, ret);
}
 
-   /* Save PLL status if it is a clock source */
-   src_pll = msm_dsi_phy_get_pll(msm_dsi->phy);
-   msm_dsi_pll_save_state(src_pll);
+   /* Save PHY status if it is a clock source */
+   msm_dsi_phy_pll_save_state(msm_dsi->phy);
 
ret = msm_dsi_host_power_off(host);
if (ret)
diff --git a/drivers/gpu/drm/msm/dsi/phy/dsi_phy.c 
b/drivers/gpu/drm/msm/dsi/phy/dsi_phy.c
index 46561435a27d..a1360e2dad3b 100644
--- a/drivers/gpu/drm/msm/dsi/phy/dsi_phy.c
+++ b/drivers/gpu/drm/msm/dsi/phy/dsi_phy.c
@@ -799,9 +799,9 @@ int msm_dsi_phy_enable(struct msm_dsi_phy *phy, int 
src_pll_id,
 * source.
 */
if (phy->usecase != MSM_DSI_PHY_SLAVE) {
-   ret = msm_dsi_pll_restore_state(phy->pll);
+   ret = msm_dsi_phy_pll_restore_state(phy);
if (ret) {
-   DRM_DEV_ERROR(dev, "%s: failed to restore pll state, 
%d\n",
+   DRM_DEV_ERROR(dev, "%s: failed to restore phy state, 
%d\n",
__func__, ret);
goto pll_restor_fail;
}
@@ -838,14 +838,6 @@ void msm_dsi_phy_get_shared_timings(struct msm_dsi_phy 
*phy,
   sizeof(*shared_timings));
 }
 
-struct msm_dsi_pll *msm_dsi_phy_get_pll(struct msm_dsi_phy *phy)
-{
-   if (!phy)
-   return NULL;
-
-   return phy->pll;
-}
-
 void msm_dsi_phy_set_usecase(struct msm_dsi_phy *phy,
 enum msm_dsi_phy_usecase uc)
 {
@@ -863,3 +855,26 @@ int msm_dsi_phy_get_clk_provider(struct msm_dsi_phy *phy,
 
return -EINVAL;
 }
+
+void msm_dsi_phy_pll_save_state(struct msm_dsi_phy *phy)
+{
+   if (phy->cfg->pll_ops.save_state) {
+   phy->cfg->pll_ops.save_state(phy->pll);
+   phy->pll->state_saved = true;
+   }
+}
+
+int msm_dsi_phy_pll_restore_state(struct msm_dsi_phy *phy)
+{
+   

[Freedreno] [PATCH v4 10/24] drm/msm/dsi: push provided clocks handling into a generic code

2021-03-31 Thread Dmitry Baryshkov
All MSM DSI PHYs provide two clocks: a byte clock and a pixel clock.
Register and unregister the provided clocks from generic code, removing
boilerplate from the individual MSM DSI PHY drivers.

Signed-off-by: Dmitry Baryshkov 
Reviewed-by: Abhinav Kumar 
Tested-by: Stephen Boyd  # on sc7180 lazor
---
 drivers/gpu/drm/msm/dsi/dsi.h | 11 +---
 drivers/gpu/drm/msm/dsi/dsi_host.c|  4 +-
 drivers/gpu/drm/msm/dsi/dsi_manager.c | 13 +---
 drivers/gpu/drm/msm/dsi/phy/dsi_phy.c | 34 ++
 drivers/gpu/drm/msm/dsi/phy/dsi_phy.h |  9 ++-
 drivers/gpu/drm/msm/dsi/phy/dsi_phy_10nm.c| 55 ++--
 drivers/gpu/drm/msm/dsi/phy/dsi_phy_14nm.c| 53 ++--
 drivers/gpu/drm/msm/dsi/phy/dsi_phy_28nm.c| 63 +++
 .../gpu/drm/msm/dsi/phy/dsi_phy_28nm_8960.c   | 57 +++--
 drivers/gpu/drm/msm/dsi/phy/dsi_phy_7nm.c | 55 ++--
 drivers/gpu/drm/msm/dsi/phy/dsi_pll.c | 16 +
 drivers/gpu/drm/msm/dsi/phy/dsi_pll.h |  3 +-
 12 files changed, 78 insertions(+), 295 deletions(-)

diff --git a/drivers/gpu/drm/msm/dsi/dsi.h b/drivers/gpu/drm/msm/dsi/dsi.h
index b310cf344ed4..43590f338d20 100644
--- a/drivers/gpu/drm/msm/dsi/dsi.h
+++ b/drivers/gpu/drm/msm/dsi/dsi.h
@@ -96,19 +96,12 @@ struct drm_encoder *msm_dsi_get_encoder(struct msm_dsi 
*msm_dsi);
 struct msm_dsi_pll;
 #ifdef CONFIG_DRM_MSM_DSI_PLL
 void msm_dsi_pll_destroy(struct msm_dsi_pll *pll);
-int msm_dsi_pll_get_clk_provider(struct msm_dsi_pll *pll,
-   struct clk **byte_clk_provider, struct clk **pixel_clk_provider);
 void msm_dsi_pll_save_state(struct msm_dsi_pll *pll);
 int msm_dsi_pll_restore_state(struct msm_dsi_pll *pll);
 #else
 static inline void msm_dsi_pll_destroy(struct msm_dsi_pll *pll)
 {
 }
-static inline int msm_dsi_pll_get_clk_provider(struct msm_dsi_pll *pll,
-   struct clk **byte_clk_provider, struct clk **pixel_clk_provider)
-{
-   return -ENODEV;
-}
 static inline void msm_dsi_pll_save_state(struct msm_dsi_pll *pll)
 {
 }
@@ -144,7 +137,7 @@ struct drm_bridge *msm_dsi_host_get_bridge(struct 
mipi_dsi_host *host);
 int msm_dsi_host_register(struct mipi_dsi_host *host, bool check_defer);
 void msm_dsi_host_unregister(struct mipi_dsi_host *host);
 int msm_dsi_host_set_src_pll(struct mipi_dsi_host *host,
-   struct msm_dsi_pll *src_pll);
+   struct msm_dsi_phy *src_phy);
 void msm_dsi_host_reset_phy(struct mipi_dsi_host *host);
 void msm_dsi_host_get_phy_clk_req(struct mipi_dsi_host *host,
struct msm_dsi_phy_clk_request *clk_req,
@@ -196,6 +189,8 @@ void msm_dsi_phy_get_shared_timings(struct msm_dsi_phy *phy,
 struct msm_dsi_pll *msm_dsi_phy_get_pll(struct msm_dsi_phy *phy);
 void msm_dsi_phy_set_usecase(struct msm_dsi_phy *phy,
 enum msm_dsi_phy_usecase uc);
+int msm_dsi_phy_get_clk_provider(struct msm_dsi_phy *phy,
+   struct clk **byte_clk_provider, struct clk **pixel_clk_provider);
 
 #endif /* __DSI_CONNECTOR_H__ */
 
diff --git a/drivers/gpu/drm/msm/dsi/dsi_host.c 
b/drivers/gpu/drm/msm/dsi/dsi_host.c
index ab281cba0f08..41e1d0f7ab6e 100644
--- a/drivers/gpu/drm/msm/dsi/dsi_host.c
+++ b/drivers/gpu/drm/msm/dsi/dsi_host.c
@@ -2226,13 +2226,13 @@ void msm_dsi_host_cmd_xfer_commit(struct mipi_dsi_host 
*host, u32 dma_base,
 }
 
 int msm_dsi_host_set_src_pll(struct mipi_dsi_host *host,
-   struct msm_dsi_pll *src_pll)
+   struct msm_dsi_phy *src_phy)
 {
struct msm_dsi_host *msm_host = to_msm_dsi_host(host);
struct clk *byte_clk_provider, *pixel_clk_provider;
int ret;
 
-   ret = msm_dsi_pll_get_clk_provider(src_pll,
+   ret = msm_dsi_phy_get_clk_provider(src_phy,
&byte_clk_provider, &pixel_clk_provider);
if (ret) {
pr_info("%s: can't get provider from pll, don't set parent\n",
diff --git a/drivers/gpu/drm/msm/dsi/dsi_manager.c 
b/drivers/gpu/drm/msm/dsi/dsi_manager.c
index 1d28dfba2c9b..86e36be58701 100644
--- a/drivers/gpu/drm/msm/dsi/dsi_manager.c
+++ b/drivers/gpu/drm/msm/dsi/dsi_manager.c
@@ -70,7 +70,6 @@ static int dsi_mgr_setup_components(int id)
struct msm_dsi *other_dsi = dsi_mgr_get_other_dsi(id);
struct msm_dsi *clk_master_dsi = dsi_mgr_get_dsi(DSI_CLOCK_MASTER);
struct msm_dsi *clk_slave_dsi = dsi_mgr_get_dsi(DSI_CLOCK_SLAVE);
-   struct msm_dsi_pll *src_pll;
int ret;
 
if (!IS_DUAL_DSI()) {
@@ -79,10 +78,7 @@ static int dsi_mgr_setup_components(int id)
return ret;
 
msm_dsi_phy_set_usecase(msm_dsi->phy, MSM_DSI_PHY_STANDALONE);
-   src_pll = msm_dsi_phy_get_pll(msm_dsi->phy);
-   if (IS_ERR(src_pll))
-   return PTR_ERR(src_pll);
-   ret = msm_dsi_host_set_src_pll(msm_dsi->host, src_pll);
+   ret = msm_dsi_host_set_src_pll(msm_dsi->host, msm_dsi->phy);
} else if (!other_dsi) {

[Freedreno] [PATCH v4 07/24] drm/msm/dsi: drop global msm_dsi_phy_type enumeration

2021-03-31 Thread Dmitry Baryshkov
With the current upstream driver the msm_dsi_phy_type enum does not make
much sense: all DSI PHYs are probed using the dt bindings, and the phy type
is not passed between drivers. Use quirks in the individual PHY drivers
to differentiate minor hardware differences and drop the enum.

Signed-off-by: Dmitry Baryshkov 
Reviewed-by: Abhinav Kumar 
Tested-by: Stephen Boyd  # on sc7180 lazor
---
 drivers/gpu/drm/msm/dsi/dsi.h | 12 ---
 drivers/gpu/drm/msm/dsi/phy/dsi_phy.h |  4 
 drivers/gpu/drm/msm/dsi/phy/dsi_phy_10nm.c| 11 +-
 drivers/gpu/drm/msm/dsi/phy/dsi_phy_14nm.c|  2 --
 drivers/gpu/drm/msm/dsi/phy/dsi_phy_20nm.c|  1 -
 drivers/gpu/drm/msm/dsi/phy/dsi_phy_28nm.c| 19 --
 .../gpu/drm/msm/dsi/phy/dsi_phy_28nm_8960.c   |  1 -
 drivers/gpu/drm/msm/dsi/phy/dsi_phy_7nm.c | 20 ++-
 8 files changed, 25 insertions(+), 45 deletions(-)

diff --git a/drivers/gpu/drm/msm/dsi/dsi.h b/drivers/gpu/drm/msm/dsi/dsi.h
index 21cf883fb6f1..98a4b296fa30 100644
--- a/drivers/gpu/drm/msm/dsi/dsi.h
+++ b/drivers/gpu/drm/msm/dsi/dsi.h
@@ -23,18 +23,6 @@
 struct msm_dsi_phy_shared_timings;
 struct msm_dsi_phy_clk_request;
 
-enum msm_dsi_phy_type {
-   MSM_DSI_PHY_28NM_HPM,
-   MSM_DSI_PHY_28NM_LP,
-   MSM_DSI_PHY_20NM,
-   MSM_DSI_PHY_28NM_8960,
-   MSM_DSI_PHY_14NM,
-   MSM_DSI_PHY_10NM,
-   MSM_DSI_PHY_7NM,
-   MSM_DSI_PHY_7NM_V4_1,
-   MSM_DSI_PHY_MAX
-};
-
 enum msm_dsi_phy_usecase {
MSM_DSI_PHY_STANDALONE,
MSM_DSI_PHY_MASTER,
diff --git a/drivers/gpu/drm/msm/dsi/phy/dsi_phy.h 
b/drivers/gpu/drm/msm/dsi/phy/dsi_phy.h
index 244d2c900d40..39abb86446f9 100644
--- a/drivers/gpu/drm/msm/dsi/phy/dsi_phy.h
+++ b/drivers/gpu/drm/msm/dsi/phy/dsi_phy.h
@@ -13,9 +13,6 @@
 #define dsi_phy_read(offset) msm_readl((offset))
 #define dsi_phy_write(offset, data) msm_writel((data), (offset))
 
-/* v3.0.0 10nm implementation that requires the old timings settings */
-#define V3_0_0_10NM_OLD_TIMINGS_QUIRK  BIT(0)
-
 struct msm_dsi_phy_ops {
int (*pll_init)(struct msm_dsi_phy *phy);
int (*enable)(struct msm_dsi_phy *phy, int src_pll_id,
@@ -37,7 +34,6 @@ struct msm_dsi_pll_ops {
 };
 
 struct msm_dsi_phy_cfg {
-   enum msm_dsi_phy_type type;
struct dsi_reg_config reg_cfg;
struct msm_dsi_phy_ops ops;
const struct msm_dsi_pll_ops pll_ops;
diff --git a/drivers/gpu/drm/msm/dsi/phy/dsi_phy_10nm.c 
b/drivers/gpu/drm/msm/dsi/phy/dsi_phy_10nm.c
index f697ff9a0d8e..dc8ccc994759 100644
--- a/drivers/gpu/drm/msm/dsi/phy/dsi_phy_10nm.c
+++ b/drivers/gpu/drm/msm/dsi/phy/dsi_phy_10nm.c
@@ -59,6 +59,9 @@ struct dsi_pll_regs {
u32 ssc_control;
 };
 
+/* v3.0.0 10nm implementation that requires the old timings settings */
+#define DSI_PHY_10NM_QUIRK_OLD_TIMINGS BIT(0)
+
 struct dsi_pll_config {
u32 ref_freq;
bool div_override;
@@ -915,7 +918,7 @@ static void dsi_phy_hw_v3_0_lane_settings(struct 
msm_dsi_phy *phy)
u8 tx_dctrl[] = { 0x00, 0x00, 0x00, 0x04, 0x01 };
void __iomem *lane_base = phy->lane_base;
 
-   if (phy->cfg->quirks & V3_0_0_10NM_OLD_TIMINGS_QUIRK)
+   if (phy->cfg->quirks & DSI_PHY_10NM_QUIRK_OLD_TIMINGS)
tx_dctrl[3] = 0x02;
 
/* Strength ctrl settings */
@@ -950,7 +953,7 @@ static void dsi_phy_hw_v3_0_lane_settings(struct 
msm_dsi_phy *phy)
  tx_dctrl[i]);
}
 
-   if (!(phy->cfg->quirks & V3_0_0_10NM_OLD_TIMINGS_QUIRK)) {
+   if (!(phy->cfg->quirks & DSI_PHY_10NM_QUIRK_OLD_TIMINGS)) {
/* Toggle BIT 0 to release freeze I/0 */
dsi_phy_write(lane_base + REG_DSI_10nm_PHY_LN_TX_DCTRL(3), 
0x05);
dsi_phy_write(lane_base + REG_DSI_10nm_PHY_LN_TX_DCTRL(3), 
0x04);
@@ -1090,7 +1093,6 @@ static void dsi_10nm_phy_disable(struct msm_dsi_phy *phy)
 }
 
 const struct msm_dsi_phy_cfg dsi_phy_10nm_cfgs = {
-   .type = MSM_DSI_PHY_10NM,
.src_pll_truthtable = { {false, false}, {true, false} },
.has_phy_lane = true,
.reg_cfg = {
@@ -1116,7 +1118,6 @@ const struct msm_dsi_phy_cfg dsi_phy_10nm_cfgs = {
 };
 
 const struct msm_dsi_phy_cfg dsi_phy_10nm_8998_cfgs = {
-   .type = MSM_DSI_PHY_10NM,
.src_pll_truthtable = { {false, false}, {true, false} },
.has_phy_lane = true,
.reg_cfg = {
@@ -1139,5 +1140,5 @@ const struct msm_dsi_phy_cfg dsi_phy_10nm_8998_cfgs = {
},
.io_start = { 0xc994400, 0xc996400 },
.num_dsi_phy = 2,
-   .quirks = V3_0_0_10NM_OLD_TIMINGS_QUIRK,
+   .quirks = DSI_PHY_10NM_QUIRK_OLD_TIMINGS,
 };
diff --git a/drivers/gpu/drm/msm/dsi/phy/dsi_phy_14nm.c 
b/drivers/gpu/drm/msm/dsi/phy/dsi_phy_14nm.c
index 011d285bf2c0..d78f846cf8e4 100644
--- a/drivers/gpu/drm/msm/dsi/phy/dsi_phy_14nm.c
+++ b/drivers/gpu/drm/msm/dsi/phy/dsi_phy_14nm.c
@@ -1215,7 +1215,6 @@ static void dsi_14nm_phy_disable(struct msm_dsi_phy *phy)
 }
 
 const 

[Freedreno] [PATCH v4 09/24] drm/msm/dsi: remove msm_dsi_pll_set_usecase

2021-03-31 Thread Dmitry Baryshkov
msm_dsi_pll_set_usecase() function is not used outside of individual DSI
PHY drivers, so drop it in favour of calling the respective
set_usecase functions directly.

Signed-off-by: Dmitry Baryshkov 
Reviewed-by: Abhinav Kumar 
Tested-by: Stephen Boyd  # on sc7180 lazor
---
 drivers/gpu/drm/msm/dsi/dsi.h  | 7 ---
 drivers/gpu/drm/msm/dsi/phy/dsi_phy.h  | 2 --
 drivers/gpu/drm/msm/dsi/phy/dsi_phy_10nm.c | 4 +---
 drivers/gpu/drm/msm/dsi/phy/dsi_phy_14nm.c | 4 +---
 drivers/gpu/drm/msm/dsi/phy/dsi_phy_7nm.c  | 4 +---
 drivers/gpu/drm/msm/dsi/phy/dsi_pll.c  | 9 -
 6 files changed, 3 insertions(+), 27 deletions(-)

diff --git a/drivers/gpu/drm/msm/dsi/dsi.h b/drivers/gpu/drm/msm/dsi/dsi.h
index 98a4b296fa30..b310cf344ed4 100644
--- a/drivers/gpu/drm/msm/dsi/dsi.h
+++ b/drivers/gpu/drm/msm/dsi/dsi.h
@@ -100,8 +100,6 @@ int msm_dsi_pll_get_clk_provider(struct msm_dsi_pll *pll,
struct clk **byte_clk_provider, struct clk **pixel_clk_provider);
 void msm_dsi_pll_save_state(struct msm_dsi_pll *pll);
 int msm_dsi_pll_restore_state(struct msm_dsi_pll *pll);
-int msm_dsi_pll_set_usecase(struct msm_dsi_pll *pll,
-   enum msm_dsi_phy_usecase uc);
 #else
 static inline void msm_dsi_pll_destroy(struct msm_dsi_pll *pll)
 {
@@ -118,11 +116,6 @@ static inline int msm_dsi_pll_restore_state(struct 
msm_dsi_pll *pll)
 {
return 0;
 }
-static inline int msm_dsi_pll_set_usecase(struct msm_dsi_pll *pll,
- enum msm_dsi_phy_usecase uc)
-{
-   return -ENODEV;
-}
 #endif
 
 /* dsi host */
diff --git a/drivers/gpu/drm/msm/dsi/phy/dsi_phy.h 
b/drivers/gpu/drm/msm/dsi/phy/dsi_phy.h
index 000e4207dabc..f737bef74b91 100644
--- a/drivers/gpu/drm/msm/dsi/phy/dsi_phy.h
+++ b/drivers/gpu/drm/msm/dsi/phy/dsi_phy.h
@@ -29,8 +29,6 @@ struct msm_dsi_pll_ops {
void (*destroy)(struct msm_dsi_pll *pll);
void (*save_state)(struct msm_dsi_pll *pll);
int (*restore_state)(struct msm_dsi_pll *pll);
-   int (*set_usecase)(struct msm_dsi_pll *pll,
-  enum msm_dsi_phy_usecase uc);
 };
 
 struct msm_dsi_phy_cfg {
diff --git a/drivers/gpu/drm/msm/dsi/phy/dsi_phy_10nm.c 
b/drivers/gpu/drm/msm/dsi/phy/dsi_phy_10nm.c
index 5f9d0cfc4e03..7a98e420414f 100644
--- a/drivers/gpu/drm/msm/dsi/phy/dsi_phy_10nm.c
+++ b/drivers/gpu/drm/msm/dsi/phy/dsi_phy_10nm.c
@@ -1049,7 +1049,7 @@ static int dsi_10nm_phy_enable(struct msm_dsi_phy *phy, 
int src_pll_id,
/* Select full-rate mode */
dsi_phy_write(base + REG_DSI_10nm_PHY_CMN_CTRL_2, 0x40);
 
-   ret = msm_dsi_pll_set_usecase(phy->pll, phy->usecase);
+   ret = dsi_pll_10nm_set_usecase(phy->pll, phy->usecase);
if (ret) {
DRM_DEV_ERROR(&phy->pdev->dev, "%s: set pll usecase failed, 
%d\n",
__func__, ret);
@@ -1109,7 +1109,6 @@ const struct msm_dsi_phy_cfg dsi_phy_10nm_cfgs = {
.destroy = dsi_pll_10nm_destroy,
.save_state = dsi_pll_10nm_save_state,
.restore_state = dsi_pll_10nm_restore_state,
-   .set_usecase = dsi_pll_10nm_set_usecase,
},
.min_pll_rate = 1000000000UL,
.max_pll_rate = 3500000000UL,
@@ -1136,7 +1135,6 @@ const struct msm_dsi_phy_cfg dsi_phy_10nm_8998_cfgs = {
.destroy = dsi_pll_10nm_destroy,
.save_state = dsi_pll_10nm_save_state,
.restore_state = dsi_pll_10nm_restore_state,
-   .set_usecase = dsi_pll_10nm_set_usecase,
},
.min_pll_rate = 1000000000UL,
.max_pll_rate = 3500000000UL,
diff --git a/drivers/gpu/drm/msm/dsi/phy/dsi_phy_14nm.c 
b/drivers/gpu/drm/msm/dsi/phy/dsi_phy_14nm.c
index 8e4528301e5d..bab86fa6dc4b 100644
--- a/drivers/gpu/drm/msm/dsi/phy/dsi_phy_14nm.c
+++ b/drivers/gpu/drm/msm/dsi/phy/dsi_phy_14nm.c
@@ -1190,7 +1190,7 @@ static int dsi_14nm_phy_enable(struct msm_dsi_phy *phy, 
int src_pll_id,
REG_DSI_14nm_PHY_CMN_GLBL_TEST_CTRL,
DSI_14nm_PHY_CMN_GLBL_TEST_CTRL_BITCLK_HS_SEL);
 
-   ret = msm_dsi_pll_set_usecase(phy->pll, phy->usecase);
+   ret = dsi_pll_14nm_set_usecase(phy->pll, phy->usecase);
if (ret) {
DRM_DEV_ERROR(&phy->pdev->dev, "%s: set pll usecase failed, 
%d\n",
__func__, ret);
@@ -1231,7 +1231,6 @@ const struct msm_dsi_phy_cfg dsi_phy_14nm_cfgs = {
.destroy = dsi_pll_14nm_destroy,
.save_state = dsi_pll_14nm_save_state,
.restore_state = dsi_pll_14nm_restore_state,
-   .set_usecase = dsi_pll_14nm_set_usecase,
.disable_seq = dsi_pll_14nm_disable_seq,
.enable_seq = dsi_pll_14nm_enable_seq,
},
@@ -1260,7 +1259,6 @@ const struct msm_dsi_phy_cfg dsi_phy_14nm_660_cfgs = {
.destroy = dsi_pll_14nm_destroy,
.save_state = dsi_pll_14nm_save_state,
   

[Freedreno] [PATCH v4 11/24] drm/msm/dsi: use devm_clk_*register to register DSI PHY clocks

2021-03-31 Thread Dmitry Baryshkov
Use the devres-enabled versions of the clock registration functions. This
lets us remove the dsi_pll destroy callbacks completely.

Signed-off-by: Dmitry Baryshkov 
Reviewed-by: Abhinav Kumar 
Tested-by: Stephen Boyd  # on sc7180 lazor
---
 drivers/gpu/drm/msm/dsi/dsi.h |  4 -
 drivers/gpu/drm/msm/dsi/phy/dsi_phy.c |  2 -
 drivers/gpu/drm/msm/dsi/phy/dsi_phy.h |  1 -
 drivers/gpu/drm/msm/dsi/phy/dsi_phy_10nm.c| 84 ---
 drivers/gpu/drm/msm/dsi/phy/dsi_phy_14nm.c| 35 +---
 drivers/gpu/drm/msm/dsi/phy/dsi_phy_28nm.c| 50 +--
 .../gpu/drm/msm/dsi/phy/dsi_phy_28nm_8960.c   | 39 +++--
 drivers/gpu/drm/msm/dsi/phy/dsi_phy_7nm.c | 84 ---
 drivers/gpu/drm/msm/dsi/phy/dsi_pll.c | 17 
 drivers/gpu/drm/msm/dsi/phy/dsi_pll.h |  4 -
 10 files changed, 71 insertions(+), 249 deletions(-)

diff --git a/drivers/gpu/drm/msm/dsi/dsi.h b/drivers/gpu/drm/msm/dsi/dsi.h
index 43590f338d20..0970f05cd47f 100644
--- a/drivers/gpu/drm/msm/dsi/dsi.h
+++ b/drivers/gpu/drm/msm/dsi/dsi.h
@@ -95,13 +95,9 @@ struct drm_encoder *msm_dsi_get_encoder(struct msm_dsi 
*msm_dsi);
 /* dsi pll */
 struct msm_dsi_pll;
 #ifdef CONFIG_DRM_MSM_DSI_PLL
-void msm_dsi_pll_destroy(struct msm_dsi_pll *pll);
 void msm_dsi_pll_save_state(struct msm_dsi_pll *pll);
 int msm_dsi_pll_restore_state(struct msm_dsi_pll *pll);
 #else
-static inline void msm_dsi_pll_destroy(struct msm_dsi_pll *pll)
-{
-}
 static inline void msm_dsi_pll_save_state(struct msm_dsi_pll *pll)
 {
 }
diff --git a/drivers/gpu/drm/msm/dsi/phy/dsi_phy.c 
b/drivers/gpu/drm/msm/dsi/phy/dsi_phy.c
index 69214447f757..df3b91b0ea88 100644
--- a/drivers/gpu/drm/msm/dsi/phy/dsi_phy.c
+++ b/drivers/gpu/drm/msm/dsi/phy/dsi_phy.c
@@ -744,7 +744,6 @@ static int dsi_phy_driver_probe(struct platform_device 
*pdev)
 fail:
if (phy->pll) {
of_clk_del_provider(dev->of_node);
-   msm_dsi_pll_destroy(phy->pll);
phy->pll = NULL;
}
 
@@ -757,7 +756,6 @@ static int dsi_phy_driver_remove(struct platform_device 
*pdev)
 
if (phy && phy->pll) {
of_clk_del_provider(pdev->dev.of_node);
-   msm_dsi_pll_destroy(phy->pll);
phy->pll = NULL;
}
 
diff --git a/drivers/gpu/drm/msm/dsi/phy/dsi_phy.h 
b/drivers/gpu/drm/msm/dsi/phy/dsi_phy.h
index c3099629fa3b..2c5196844ba9 100644
--- a/drivers/gpu/drm/msm/dsi/phy/dsi_phy.h
+++ b/drivers/gpu/drm/msm/dsi/phy/dsi_phy.h
@@ -23,7 +23,6 @@ struct msm_dsi_phy_ops {
 struct msm_dsi_pll_ops {
int (*enable_seq)(struct msm_dsi_pll *pll);
void (*disable_seq)(struct msm_dsi_pll *pll);
-   void (*destroy)(struct msm_dsi_pll *pll);
void (*save_state)(struct msm_dsi_pll *pll);
int (*restore_state)(struct msm_dsi_pll *pll);
 };
diff --git a/drivers/gpu/drm/msm/dsi/phy/dsi_phy_10nm.c 
b/drivers/gpu/drm/msm/dsi/phy/dsi_phy_10nm.c
index 8666da1c29e5..6300b92c65eb 100644
--- a/drivers/gpu/drm/msm/dsi/phy/dsi_phy_10nm.c
+++ b/drivers/gpu/drm/msm/dsi/phy/dsi_phy_10nm.c
@@ -103,15 +103,6 @@ struct dsi_pll_10nm {
struct dsi_pll_config pll_configuration;
struct dsi_pll_regs reg_setup;
 
-   /* private clocks: */
-   struct clk_hw *out_div_clk_hw;
-   struct clk_hw *bit_clk_hw;
-   struct clk_hw *byte_clk_hw;
-   struct clk_hw *by_2_bit_clk_hw;
-   struct clk_hw *post_out_div_clk_hw;
-   struct clk_hw *pclk_mux_hw;
-   struct clk_hw *out_dsiclk_hw;
-
struct pll_10nm_cached_state cached_state;
 
enum msm_dsi_phy_usecase uc;
@@ -614,22 +605,6 @@ static int dsi_pll_10nm_set_usecase(struct msm_dsi_pll 
*pll,
return 0;
 }
 
-static void dsi_pll_10nm_destroy(struct msm_dsi_pll *pll)
-{
-   struct dsi_pll_10nm *pll_10nm = to_pll_10nm(pll);
-
-   DBG("DSI PLL%d", pll_10nm->id);
-
-   clk_hw_unregister_divider(pll_10nm->out_dsiclk_hw);
-   clk_hw_unregister_mux(pll_10nm->pclk_mux_hw);
-   clk_hw_unregister_fixed_factor(pll_10nm->post_out_div_clk_hw);
-   clk_hw_unregister_fixed_factor(pll_10nm->by_2_bit_clk_hw);
-   clk_hw_unregister_fixed_factor(pll_10nm->byte_clk_hw);
-   clk_hw_unregister_divider(pll_10nm->bit_clk_hw);
-   clk_hw_unregister_divider(pll_10nm->out_div_clk_hw);
-   clk_hw_unregister(&pll_10nm->base.clk_hw);
-}
-
 /*
  * The post dividers and mux clocks are created using the standard divider and
  * mux API. Unlike the 14nm PHY, the slave PLL doesn't need its dividers/mux
@@ -656,30 +631,28 @@ static int pll_10nm_register(struct dsi_pll_10nm 
*pll_10nm, struct clk_hw **prov
snprintf(vco_name, 32, "dsi%dvco_clk", pll_10nm->id);
pll_10nm->base.clk_hw.init = &vco_init;
 
-   ret = clk_hw_register(dev, &pll_10nm->base.clk_hw);
+   ret = devm_clk_hw_register(dev, &pll_10nm->base.clk_hw);
if (ret)
return ret;
 
snprintf(clk_name, 32, "dsi%d_pll_out_div_clk", pll_10nm->id);
snprint
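
To spell out the devres pattern the hunk above switches to (a hedged sketch,
not the series' exact code; the helper name is invented): once a clk_hw is
registered with devm_clk_hw_register(), the clock framework unregisters it
automatically when the device is unbound, which is what makes the per-PLL
destroy callbacks removable.

#include <linux/clk-provider.h>

/* Sketch only: replaces a clk_hw_register()/clk_hw_unregister() pair that
 * previously lived in the PLL's .destroy callback. */
static int example_register_vco(struct device *dev, struct clk_hw *vco_hw,
				const struct clk_init_data *init)
{
	vco_hw->init = init;

	return devm_clk_hw_register(dev, vco_hw);
}
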

[Freedreno] [PATCH v4 03/24] drm/msm/dsi: replace PHY's init callback with configurable data

2021-03-31 Thread Dmitry Baryshkov
The DSI PHY init callback would map either dsi_phy_regulator or dsi_phy_lane,
depending on the PHY type. Replace those callbacks with configuration
options that govern which of those regions gets mapped.

Signed-off-by: Dmitry Baryshkov 
Reviewed-by: Abhinav Kumar 
Tested-by: Stephen Boyd  # on sc7180 lazor
---
 drivers/gpu/drm/msm/dsi/phy/dsi_phy.c | 42 ---
 drivers/gpu/drm/msm/dsi/phy/dsi_phy.h |  4 +-
 drivers/gpu/drm/msm/dsi/phy/dsi_phy_10nm.c| 19 +
 drivers/gpu/drm/msm/dsi/phy/dsi_phy_14nm.c| 19 +
 drivers/gpu/drm/msm/dsi/phy/dsi_phy_20nm.c|  2 +-
 drivers/gpu/drm/msm/dsi/phy/dsi_phy_28nm.c|  6 +--
 .../gpu/drm/msm/dsi/phy/dsi_phy_28nm_8960.c   |  2 +-
 drivers/gpu/drm/msm/dsi/phy/dsi_phy_7nm.c | 19 +
 8 files changed, 31 insertions(+), 82 deletions(-)

diff --git a/drivers/gpu/drm/msm/dsi/phy/dsi_phy.c 
b/drivers/gpu/drm/msm/dsi/phy/dsi_phy.c
index e8c1a727179c..83eb0a630443 100644
--- a/drivers/gpu/drm/msm/dsi/phy/dsi_phy.c
+++ b/drivers/gpu/drm/msm/dsi/phy/dsi_phy.c
@@ -637,24 +637,6 @@ static int dsi_phy_get_id(struct msm_dsi_phy *phy)
return -EINVAL;
 }
 
-int msm_dsi_phy_init_common(struct msm_dsi_phy *phy)
-{
-   struct platform_device *pdev = phy->pdev;
-   int ret = 0;
-
-   phy->reg_base = msm_ioremap(pdev, "dsi_phy_regulator",
-   "DSI_PHY_REG");
-   if (IS_ERR(phy->reg_base)) {
-   DRM_DEV_ERROR(&pdev->dev, "%s: failed to map phy regulator 
base\n",
-   __func__);
-   ret = -ENOMEM;
-   goto fail;
-   }
-
-fail:
-   return ret;
-}
-
 static int dsi_phy_driver_probe(struct platform_device *pdev)
 {
struct msm_dsi_phy *phy;
@@ -691,6 +673,24 @@ static int dsi_phy_driver_probe(struct platform_device 
*pdev)
goto fail;
}
 
+   if (phy->cfg->has_phy_lane) {
+   phy->lane_base = msm_ioremap(pdev, "dsi_phy_lane", 
"DSI_PHY_LANE");
+   if (IS_ERR(phy->lane_base)) {
+   DRM_DEV_ERROR(&pdev->dev, "%s: failed to map phy lane 
base\n", __func__);
+   ret = -ENOMEM;
+   goto fail;
+   }
+   }
+
+   if (phy->cfg->has_phy_regulator) {
+   phy->reg_base = msm_ioremap(pdev, "dsi_phy_regulator", 
"DSI_PHY_REG");
+   if (IS_ERR(phy->reg_base)) {
+   DRM_DEV_ERROR(&pdev->dev, "%s: failed to map phy 
regulator base\n", __func__);
+   ret = -ENOMEM;
+   goto fail;
+   }
+   }
+
ret = dsi_phy_regulator_init(phy);
if (ret)
goto fail;
@@ -702,12 +702,6 @@ static int dsi_phy_driver_probe(struct platform_device 
*pdev)
goto fail;
}
 
-   if (phy->cfg->ops.init) {
-   ret = phy->cfg->ops.init(phy);
-   if (ret)
-   goto fail;
-   }
-
/* PLL init will call into clk_register which requires
 * register access, so we need to enable power and ahb clock.
 */
diff --git a/drivers/gpu/drm/msm/dsi/phy/dsi_phy.h 
b/drivers/gpu/drm/msm/dsi/phy/dsi_phy.h
index d2bd74b6f357..03dfb08e7128 100644
--- a/drivers/gpu/drm/msm/dsi/phy/dsi_phy.h
+++ b/drivers/gpu/drm/msm/dsi/phy/dsi_phy.h
@@ -17,7 +17,6 @@
 #define V3_0_0_10NM_OLD_TIMINGS_QUIRK  BIT(0)
 
 struct msm_dsi_phy_ops {
-   int (*init) (struct msm_dsi_phy *phy);
int (*enable)(struct msm_dsi_phy *phy, int src_pll_id,
struct msm_dsi_phy_clk_request *clk_req);
void (*disable)(struct msm_dsi_phy *phy);
@@ -37,6 +36,8 @@ struct msm_dsi_phy_cfg {
const resource_size_t io_start[DSI_MAX];
const int num_dsi_phy;
const int quirks;
+   bool has_phy_regulator;
+   bool has_phy_lane;
 };
 
 extern const struct msm_dsi_phy_cfg dsi_phy_28nm_hpm_cfgs;
@@ -106,7 +107,6 @@ int msm_dsi_dphy_timing_calc_v4(struct msm_dsi_dphy_timing 
*timing,
struct msm_dsi_phy_clk_request *clk_req);
 void msm_dsi_phy_set_src_pll(struct msm_dsi_phy *phy, int pll_id, u32 reg,
u32 bit_mask);
-int msm_dsi_phy_init_common(struct msm_dsi_phy *phy);
 
 #endif /* __DSI_PHY_H__ */
 
diff --git a/drivers/gpu/drm/msm/dsi/phy/dsi_phy_10nm.c 
b/drivers/gpu/drm/msm/dsi/phy/dsi_phy_10nm.c
index d1b92d4dc197..655fa17a0452 100644
--- a/drivers/gpu/drm/msm/dsi/phy/dsi_phy_10nm.c
+++ b/drivers/gpu/drm/msm/dsi/phy/dsi_phy_10nm.c
@@ -216,24 +216,10 @@ static void dsi_10nm_phy_disable(struct msm_dsi_phy *phy)
DBG("DSI%d PHY disabled", phy->id);
 }
 
-static int dsi_10nm_phy_init(struct msm_dsi_phy *phy)
-{
-   struct platform_device *pdev = phy->pdev;
-
-   phy->lane_base = msm_ioremap(pdev, "dsi_phy_lane",
-"DSI_PHY_LANE");
-   if (IS_ERR(phy->lane_base)) {
-   DRM_DEV_ERROR(&pdev->dev, "%s: fail
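
For illustration, after this change a PHY variant just declares which extra
MMIO regions it needs and the common probe code above maps them. A minimal
sketch of such a configuration (illustrative values, not a real SoC entry):

static const struct msm_dsi_phy_cfg example_phy_cfgs = {
	.has_phy_lane = true,		/* map the "dsi_phy_lane" region */
	.has_phy_regulator = false,	/* skip the "dsi_phy_regulator" region */
	.io_start = { 0xae94400, 0xae96400 },	/* illustrative addresses */
	.num_dsi_phy = 2,
	/* .ops, .reg_cfg, etc. omitted for brevity */
};
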

[Freedreno] [PATCH v4 14/24] drm/msm/dsi: drop vco_delay setting from 7nm, 10nm, 14nm drivers

2021-03-31 Thread Dmitry Baryshkov
These drivers do not use the vco_delay variable, so drop it from all of
them.

Signed-off-by: Dmitry Baryshkov 
Reviewed-by: Abhinav Kumar 
Tested-by: Stephen Boyd  # on sc7180 lazor
---
 drivers/gpu/drm/msm/dsi/phy/dsi_phy_10nm.c | 3 ---
 drivers/gpu/drm/msm/dsi/phy/dsi_phy_14nm.c | 4 
 drivers/gpu/drm/msm/dsi/phy/dsi_phy_7nm.c  | 3 ---
 3 files changed, 10 deletions(-)

diff --git a/drivers/gpu/drm/msm/dsi/phy/dsi_phy_10nm.c 
b/drivers/gpu/drm/msm/dsi/phy/dsi_phy_10nm.c
index d81cea661f5c..91ae0f8dbd88 100644
--- a/drivers/gpu/drm/msm/dsi/phy/dsi_phy_10nm.c
+++ b/drivers/gpu/drm/msm/dsi/phy/dsi_phy_10nm.c
@@ -99,7 +99,6 @@ struct dsi_pll_10nm {
/* protects REG_DSI_10nm_PHY_CMN_CLK_CFG0 register */
spinlock_t postdiv_lock;
 
-   int vco_delay;
struct dsi_pll_config pll_configuration;
struct dsi_pll_regs reg_setup;
 
@@ -771,8 +770,6 @@ static int dsi_pll_10nm_init(struct msm_dsi_phy *phy)
pll = &pll_10nm->base;
pll->cfg = phy->cfg;
 
-   pll_10nm->vco_delay = 1;
-
ret = pll_10nm_register(pll_10nm, phy->provided_clocks->hws);
if (ret) {
DRM_DEV_ERROR(&pdev->dev, "failed to register PLL: %d\n", ret);
diff --git a/drivers/gpu/drm/msm/dsi/phy/dsi_phy_14nm.c 
b/drivers/gpu/drm/msm/dsi/phy/dsi_phy_14nm.c
index 7fe7c8348b42..434d02ffa7fe 100644
--- a/drivers/gpu/drm/msm/dsi/phy/dsi_phy_14nm.c
+++ b/drivers/gpu/drm/msm/dsi/phy/dsi_phy_14nm.c
@@ -122,8 +122,6 @@ struct dsi_pll_14nm {
void __iomem *phy_cmn_mmio;
void __iomem *mmio;
 
-   int vco_delay;
-
struct dsi_pll_input in;
struct dsi_pll_output out;
 
@@ -1012,8 +1010,6 @@ static int dsi_pll_14nm_init(struct msm_dsi_phy *phy)
pll = &pll_14nm->base;
pll->cfg = phy->cfg;
 
-   pll_14nm->vco_delay = 1;
-
ret = pll_14nm_register(pll_14nm, phy->provided_clocks->hws);
if (ret) {
DRM_DEV_ERROR(&pdev->dev, "failed to register PLL: %d\n", ret);
diff --git a/drivers/gpu/drm/msm/dsi/phy/dsi_phy_7nm.c 
b/drivers/gpu/drm/msm/dsi/phy/dsi_phy_7nm.c
index d725ceb0b90c..321d23b3ed18 100644
--- a/drivers/gpu/drm/msm/dsi/phy/dsi_phy_7nm.c
+++ b/drivers/gpu/drm/msm/dsi/phy/dsi_phy_7nm.c
@@ -99,7 +99,6 @@ struct dsi_pll_7nm {
/* protects REG_DSI_7nm_PHY_CMN_CLK_CFG0 register */
spinlock_t postdiv_lock;
 
-   int vco_delay;
struct dsi_pll_config pll_configuration;
struct dsi_pll_regs reg_setup;
 
@@ -796,8 +795,6 @@ static int dsi_pll_7nm_init(struct msm_dsi_phy *phy)
pll = &pll_7nm->base;
pll->cfg = phy->cfg;
 
-   pll_7nm->vco_delay = 1;
-
ret = pll_7nm_register(pll_7nm, phy->provided_clocks->hws);
if (ret) {
DRM_DEV_ERROR(&pdev->dev, "failed to register PLL: %d\n", ret);
-- 
2.30.2



[Freedreno] [PATCH v4 16/24] drm/msm/dsi: inline msm_dsi_pll_helper_clk_prepare/unprepare

2021-03-31 Thread Dmitry Baryshkov
The 10nm and 7nm drivers already do not use these helpers, as they handle
setting the slave DSI clocks after enabling the VCO. Modify the rest of the
PHY drivers to remove the unnecessary indirection and drop the
enable_seq/disable_seq PLL callbacks.

Signed-off-by: Dmitry Baryshkov 
Reviewed-by: Abhinav Kumar 
Tested-by: Stephen Boyd  # on sc7180 lazor
---
 drivers/gpu/drm/msm/dsi/phy/dsi_phy.h |   2 -
 drivers/gpu/drm/msm/dsi/phy/dsi_phy_14nm.c|  87 +++--
 drivers/gpu/drm/msm/dsi/phy/dsi_phy_28nm.c|  86 -
 .../gpu/drm/msm/dsi/phy/dsi_phy_28nm_8960.c   | 120 ++
 drivers/gpu/drm/msm/dsi/phy/dsi_pll.c |  35 -
 drivers/gpu/drm/msm/dsi/phy/dsi_pll.h |   2 -
 6 files changed, 171 insertions(+), 161 deletions(-)

diff --git a/drivers/gpu/drm/msm/dsi/phy/dsi_phy.h 
b/drivers/gpu/drm/msm/dsi/phy/dsi_phy.h
index 8133732e0c7f..b477d21804c8 100644
--- a/drivers/gpu/drm/msm/dsi/phy/dsi_phy.h
+++ b/drivers/gpu/drm/msm/dsi/phy/dsi_phy.h
@@ -32,8 +32,6 @@ struct msm_dsi_phy_ops {
 };
 
 struct msm_dsi_pll_ops {
-   int (*enable_seq)(struct msm_dsi_pll *pll);
-   void (*disable_seq)(struct msm_dsi_pll *pll);
void (*save_state)(struct msm_dsi_pll *pll);
int (*restore_state)(struct msm_dsi_pll *pll);
 };
diff --git a/drivers/gpu/drm/msm/dsi/phy/dsi_phy_14nm.c 
b/drivers/gpu/drm/msm/dsi/phy/dsi_phy_14nm.c
index 434d02ffa7fe..91c5bb2fd169 100644
--- a/drivers/gpu/drm/msm/dsi/phy/dsi_phy_14nm.c
+++ b/drivers/gpu/drm/msm/dsi/phy/dsi_phy_14nm.c
@@ -652,12 +652,58 @@ static unsigned long dsi_pll_14nm_vco_recalc_rate(struct 
clk_hw *hw,
return (unsigned long)vco_rate;
 }
 
+static int dsi_pll_14nm_vco_prepare(struct clk_hw *hw)
+{
+   struct msm_dsi_pll *pll = hw_clk_to_pll(hw);
+   struct dsi_pll_14nm *pll_14nm = to_pll_14nm(pll);
+   void __iomem *base = pll_14nm->mmio;
+   void __iomem *cmn_base = pll_14nm->phy_cmn_mmio;
+   bool locked;
+
+   DBG("");
+
+   if (unlikely(pll->pll_on))
+   return 0;
+
+   pll_write(base + REG_DSI_14nm_PHY_PLL_VREF_CFG1, 0x10);
+   pll_write(cmn_base + REG_DSI_14nm_PHY_CMN_PLL_CNTRL, 1);
+
+   locked = pll_14nm_poll_for_ready(pll_14nm, POLL_MAX_READS,
+POLL_TIMEOUT_US);
+
+   if (unlikely(!locked)) {
+   DRM_DEV_ERROR(&pll_14nm->pdev->dev, "DSI PLL lock failed\n");
+   return -EINVAL;
+   }
+
+   DBG("DSI PLL lock success");
+   pll->pll_on = true;
+
+   return 0;
+}
+
+static void dsi_pll_14nm_vco_unprepare(struct clk_hw *hw)
+{
+   struct msm_dsi_pll *pll = hw_clk_to_pll(hw);
+   struct dsi_pll_14nm *pll_14nm = to_pll_14nm(pll);
+   void __iomem *cmn_base = pll_14nm->phy_cmn_mmio;
+
+   DBG("");
+
+   if (unlikely(!pll->pll_on))
+   return;
+
+   pll_write(cmn_base + REG_DSI_14nm_PHY_CMN_PLL_CNTRL, 0);
+
+   pll->pll_on = false;
+}
+
 static const struct clk_ops clk_ops_dsi_pll_14nm_vco = {
.round_rate = msm_dsi_pll_helper_clk_round_rate,
.set_rate = dsi_pll_14nm_vco_set_rate,
.recalc_rate = dsi_pll_14nm_vco_recalc_rate,
-   .prepare = msm_dsi_pll_helper_clk_prepare,
-   .unprepare = msm_dsi_pll_helper_clk_unprepare,
+   .prepare = dsi_pll_14nm_vco_prepare,
+   .unprepare = dsi_pll_14nm_vco_unprepare,
 };
 
 /*
@@ -749,39 +795,6 @@ static const struct clk_ops clk_ops_dsi_pll_14nm_postdiv = 
{
  * PLL Callbacks
  */
 
-static int dsi_pll_14nm_enable_seq(struct msm_dsi_pll *pll)
-{
-   struct dsi_pll_14nm *pll_14nm = to_pll_14nm(pll);
-   void __iomem *base = pll_14nm->mmio;
-   void __iomem *cmn_base = pll_14nm->phy_cmn_mmio;
-   bool locked;
-
-   DBG("");
-
-   pll_write(base + REG_DSI_14nm_PHY_PLL_VREF_CFG1, 0x10);
-   pll_write(cmn_base + REG_DSI_14nm_PHY_CMN_PLL_CNTRL, 1);
-
-   locked = pll_14nm_poll_for_ready(pll_14nm, POLL_MAX_READS,
-POLL_TIMEOUT_US);
-
-   if (unlikely(!locked))
-   DRM_DEV_ERROR(&pll_14nm->pdev->dev, "DSI PLL lock failed\n");
-   else
-   DBG("DSI PLL lock success");
-
-   return locked ? 0 : -EINVAL;
-}
-
-static void dsi_pll_14nm_disable_seq(struct msm_dsi_pll *pll)
-{
-   struct dsi_pll_14nm *pll_14nm = to_pll_14nm(pll);
-   void __iomem *cmn_base = pll_14nm->phy_cmn_mmio;
-
-   DBG("");
-
-   pll_write(cmn_base + REG_DSI_14nm_PHY_CMN_PLL_CNTRL, 0);
-}
-
 static void dsi_pll_14nm_save_state(struct msm_dsi_pll *pll)
 {
struct dsi_pll_14nm *pll_14nm = to_pll_14nm(pll);
@@ -1157,8 +1170,6 @@ const struct msm_dsi_phy_cfg dsi_phy_14nm_cfgs = {
.pll_ops = {
.save_state = dsi_pll_14nm_save_state,
.restore_state = dsi_pll_14nm_restore_state,
-   .disable_seq = dsi_pll_14nm_disable_seq,
-   .enable_seq = dsi_pll_14nm_enable_seq,
},
.min_pll_rate = VCO_MIN_RATE,
.max_pll_rate

[Freedreno] [PATCH v4 08/24] drm/msm/dsi: move min/max PLL rate to phy config

2021-03-31 Thread Dmitry Baryshkov
Signed-off-by: Dmitry Baryshkov 
Reviewed-by: Abhinav Kumar 
Tested-by: Stephen Boyd  # on sc7180 lazor
---
 drivers/gpu/drm/msm/dsi/phy/dsi_phy.h   |  3 +++
 drivers/gpu/drm/msm/dsi/phy/dsi_phy_10nm.c  |  6 --
 drivers/gpu/drm/msm/dsi/phy/dsi_phy_14nm.c  |  6 --
 drivers/gpu/drm/msm/dsi/phy/dsi_phy_28nm.c  |  8 ++--
 drivers/gpu/drm/msm/dsi/phy/dsi_phy_28nm_8960.c |  4 ++--
 drivers/gpu/drm/msm/dsi/phy/dsi_phy_7nm.c   | 12 
 drivers/gpu/drm/msm/dsi/phy/dsi_pll.c   |  8 
 drivers/gpu/drm/msm/dsi/phy/dsi_pll.h   |  3 ---
 8 files changed, 27 insertions(+), 23 deletions(-)

diff --git a/drivers/gpu/drm/msm/dsi/phy/dsi_phy.h 
b/drivers/gpu/drm/msm/dsi/phy/dsi_phy.h
index 39abb86446f9..000e4207dabc 100644
--- a/drivers/gpu/drm/msm/dsi/phy/dsi_phy.h
+++ b/drivers/gpu/drm/msm/dsi/phy/dsi_phy.h
@@ -38,6 +38,9 @@ struct msm_dsi_phy_cfg {
struct msm_dsi_phy_ops ops;
const struct msm_dsi_pll_ops pll_ops;
 
+   unsigned long   min_pll_rate;
+   unsigned long   max_pll_rate;
+
/*
 * Each cell {phy_id, pll_id} of the truth table indicates
 * if the source PLL selection bit should be set for each PHY.
diff --git a/drivers/gpu/drm/msm/dsi/phy/dsi_phy_10nm.c 
b/drivers/gpu/drm/msm/dsi/phy/dsi_phy_10nm.c
index dc8ccc994759..5f9d0cfc4e03 100644
--- a/drivers/gpu/drm/msm/dsi/phy/dsi_phy_10nm.c
+++ b/drivers/gpu/drm/msm/dsi/phy/dsi_phy_10nm.c
@@ -864,8 +864,6 @@ static int dsi_pll_10nm_init(struct msm_dsi_phy *phy)
spin_lock_init(&pll_10nm->postdiv_lock);
 
pll = &pll_10nm->base;
-   pll->min_rate = 1000000000UL;
-   pll->max_rate = 3500000000UL;
pll->cfg = phy->cfg;
 
pll_10nm->vco_delay = 1;
@@ -1113,6 +1111,8 @@ const struct msm_dsi_phy_cfg dsi_phy_10nm_cfgs = {
.restore_state = dsi_pll_10nm_restore_state,
.set_usecase = dsi_pll_10nm_set_usecase,
},
+   .min_pll_rate = 1000000000UL,
+   .max_pll_rate = 3500000000UL,
.io_start = { 0xae94400, 0xae96400 },
.num_dsi_phy = 2,
 };
@@ -1138,6 +1138,8 @@ const struct msm_dsi_phy_cfg dsi_phy_10nm_8998_cfgs = {
.restore_state = dsi_pll_10nm_restore_state,
.set_usecase = dsi_pll_10nm_set_usecase,
},
+   .min_pll_rate = 1000000000UL,
+   .max_pll_rate = 3500000000UL,
.io_start = { 0xc994400, 0xc996400 },
.num_dsi_phy = 2,
.quirks = DSI_PHY_10NM_QUIRK_OLD_TIMINGS,
diff --git a/drivers/gpu/drm/msm/dsi/phy/dsi_phy_14nm.c 
b/drivers/gpu/drm/msm/dsi/phy/dsi_phy_14nm.c
index d78f846cf8e4..8e4528301e5d 100644
--- a/drivers/gpu/drm/msm/dsi/phy/dsi_phy_14nm.c
+++ b/drivers/gpu/drm/msm/dsi/phy/dsi_phy_14nm.c
@@ -1078,8 +1078,6 @@ static int dsi_pll_14nm_init(struct msm_dsi_phy *phy)
spin_lock_init(&pll_14nm->postdiv_lock);
 
pll = &pll_14nm->base;
-   pll->min_rate = VCO_MIN_RATE;
-   pll->max_rate = VCO_MAX_RATE;
pll->cfg = phy->cfg;
 
pll_14nm->vco_delay = 1;
@@ -1237,6 +1235,8 @@ const struct msm_dsi_phy_cfg dsi_phy_14nm_cfgs = {
.disable_seq = dsi_pll_14nm_disable_seq,
.enable_seq = dsi_pll_14nm_enable_seq,
},
+   .min_pll_rate = VCO_MIN_RATE,
+   .max_pll_rate = VCO_MAX_RATE,
.io_start = { 0x994400, 0x996400 },
.num_dsi_phy = 2,
 };
@@ -1264,6 +1264,8 @@ const struct msm_dsi_phy_cfg dsi_phy_14nm_660_cfgs = {
.disable_seq = dsi_pll_14nm_disable_seq,
.enable_seq = dsi_pll_14nm_enable_seq,
},
+   .min_pll_rate = VCO_MIN_RATE,
+   .max_pll_rate = VCO_MAX_RATE,
.io_start = { 0xc994400, 0xc996000 },
.num_dsi_phy = 2,
 };
diff --git a/drivers/gpu/drm/msm/dsi/phy/dsi_phy_28nm.c 
b/drivers/gpu/drm/msm/dsi/phy/dsi_phy_28nm.c
index bb33261d606d..d267b25e5da0 100644
--- a/drivers/gpu/drm/msm/dsi/phy/dsi_phy_28nm.c
+++ b/drivers/gpu/drm/msm/dsi/phy/dsi_phy_28nm.c
@@ -625,8 +625,6 @@ static int dsi_pll_28nm_init(struct msm_dsi_phy *phy)
}
 
pll = &pll_28nm->base;
-   pll->min_rate = VCO_MIN_RATE;
-   pll->max_rate = VCO_MAX_RATE;
if (phy->cfg->quirks & DSI_PHY_28NM_QUIRK_PHY_LP)
pll_28nm->vco_delay = 1000;
else
@@ -811,6 +809,8 @@ const struct msm_dsi_phy_cfg dsi_phy_28nm_hpm_cfgs = {
.disable_seq = dsi_pll_28nm_disable_seq,
.enable_seq = dsi_pll_28nm_enable_seq_hpm,
},
+   .min_pll_rate = VCO_MIN_RATE,
+   .max_pll_rate = VCO_MAX_RATE,
.io_start = { 0xfd922b00, 0xfd923100 },
.num_dsi_phy = 2,
 };
@@ -837,6 +837,8 @@ const struct msm_dsi_phy_cfg dsi_phy_28nm_hpm_famb_cfgs = {
.disable_seq = dsi_pll_28nm_disable_seq,
.enable_seq = dsi_pll_28nm_enable_seq_hpm,
},
+   .min_pll_rate = VCO_MIN_RATE,
+   .max_pll_rate = VCO_MAX_RATE,
.io_start = { 0x1a94400, 0x1a96400 },
.n

[Freedreno] [PATCH v4 05/24] drm/msm/dsi: drop multiple pll enable_seq support

2021-03-31 Thread Dmitry Baryshkov
The only PLL using multiple enable sequences is the 28nm HPM PLL, which
simply retries the same single step in a loop. Push that retry support back
into the PLL code.

Signed-off-by: Dmitry Baryshkov 
Reviewed-by: Abhinav Kumar 
Tested-by: Stephen Boyd  # on sc7180 lazor
---
 drivers/gpu/drm/msm/dsi/phy/dsi_phy_14nm.c|  3 +-
 drivers/gpu/drm/msm/dsi/phy/dsi_phy_28nm.c| 23 +--
 .../gpu/drm/msm/dsi/phy/dsi_phy_28nm_8960.c   |  3 +-
 drivers/gpu/drm/msm/dsi/phy/dsi_pll.c | 65 +++
 drivers/gpu/drm/msm/dsi/phy/dsi_pll.h |  4 +-
 5 files changed, 42 insertions(+), 56 deletions(-)

diff --git a/drivers/gpu/drm/msm/dsi/phy/dsi_phy_14nm.c 
b/drivers/gpu/drm/msm/dsi/phy/dsi_phy_14nm.c
index 6a63901da7a4..4386edfa91fe 100644
--- a/drivers/gpu/drm/msm/dsi/phy/dsi_phy_14nm.c
+++ b/drivers/gpu/drm/msm/dsi/phy/dsi_phy_14nm.c
@@ -1087,8 +1087,7 @@ struct msm_dsi_pll *msm_dsi_pll_14nm_init(struct 
platform_device *pdev, int id)
 
pll_14nm->vco_delay = 1;
 
-   pll->en_seq_cnt = 1;
-   pll->enable_seqs[0] = dsi_pll_14nm_enable_seq;
+   pll->enable_seq = dsi_pll_14nm_enable_seq;
 
ret = pll_14nm_register(pll_14nm);
if (ret) {
diff --git a/drivers/gpu/drm/msm/dsi/phy/dsi_phy_28nm.c 
b/drivers/gpu/drm/msm/dsi/phy/dsi_phy_28nm.c
index 2f502efa4dd5..760cf7956fa2 100644
--- a/drivers/gpu/drm/msm/dsi/phy/dsi_phy_28nm.c
+++ b/drivers/gpu/drm/msm/dsi/phy/dsi_phy_28nm.c
@@ -311,7 +311,7 @@ static const struct clk_ops clk_ops_dsi_pll_28nm_vco = {
 /*
  * PLL Callbacks
  */
-static int dsi_pll_28nm_enable_seq_hpm(struct msm_dsi_pll *pll)
+static int _dsi_pll_28nm_enable_seq_hpm(struct msm_dsi_pll *pll)
 {
struct dsi_pll_28nm *pll_28nm = to_pll_28nm(pll);
struct device *dev = &pll_28nm->pdev->dev;
@@ -386,6 +386,19 @@ static int dsi_pll_28nm_enable_seq_hpm(struct msm_dsi_pll 
*pll)
return locked ? 0 : -EINVAL;
 }
 
+static int dsi_pll_28nm_enable_seq_hpm(struct msm_dsi_pll *pll)
+{
+   int i, ret;
+
+   for (i = 0; i < 3; i++) {
+   ret = _dsi_pll_28nm_enable_seq_hpm(pll);
+   if (!ret)
+   return 0;
+   }
+
+   return ret;
+}
+
 static int dsi_pll_28nm_enable_seq_lp(struct msm_dsi_pll *pll)
 {
struct dsi_pll_28nm *pll_28nm = to_pll_28nm(pll);
@@ -619,15 +632,11 @@ struct msm_dsi_pll *msm_dsi_pll_28nm_init(struct 
platform_device *pdev,
if (type == MSM_DSI_PHY_28NM_HPM) {
pll_28nm->vco_delay = 1;
 
-   pll->en_seq_cnt = 3;
-   pll->enable_seqs[0] = dsi_pll_28nm_enable_seq_hpm;
-   pll->enable_seqs[1] = dsi_pll_28nm_enable_seq_hpm;
-   pll->enable_seqs[2] = dsi_pll_28nm_enable_seq_hpm;
+   pll->enable_seq = dsi_pll_28nm_enable_seq_hpm;
} else if (type == MSM_DSI_PHY_28NM_LP) {
pll_28nm->vco_delay = 1000;
 
-   pll->en_seq_cnt = 1;
-   pll->enable_seqs[0] = dsi_pll_28nm_enable_seq_lp;
+   pll->enable_seq = dsi_pll_28nm_enable_seq_lp;
} else {
DRM_DEV_ERROR(&pdev->dev, "phy type (%d) is not 28nm\n", type);
return ERR_PTR(-EINVAL);
diff --git a/drivers/gpu/drm/msm/dsi/phy/dsi_phy_28nm_8960.c 
b/drivers/gpu/drm/msm/dsi/phy/dsi_phy_28nm_8960.c
index 4a40513057e8..2cfb7edf91d8 100644
--- a/drivers/gpu/drm/msm/dsi/phy/dsi_phy_28nm_8960.c
+++ b/drivers/gpu/drm/msm/dsi/phy/dsi_phy_28nm_8960.c
@@ -515,8 +515,7 @@ struct msm_dsi_pll *msm_dsi_pll_28nm_8960_init(struct 
platform_device *pdev,
pll->save_state = dsi_pll_28nm_save_state;
pll->restore_state = dsi_pll_28nm_restore_state;
 
-   pll->en_seq_cnt = 1;
-   pll->enable_seqs[0] = dsi_pll_28nm_enable_seq;
+   pll->enable_seq = dsi_pll_28nm_enable_seq;
 
ret = pll_28nm_register(pll_28nm);
if (ret) {
diff --git a/drivers/gpu/drm/msm/dsi/phy/dsi_pll.c 
b/drivers/gpu/drm/msm/dsi/phy/dsi_pll.c
index 3dc65877fa10..9e9fa90bf504 100644
--- a/drivers/gpu/drm/msm/dsi/phy/dsi_pll.c
+++ b/drivers/gpu/drm/msm/dsi/phy/dsi_pll.c
@@ -5,46 +5,6 @@
 
 #include "dsi_pll.h"
 
-static int dsi_pll_enable(struct msm_dsi_pll *pll)
-{
-   int i, ret = 0;
-
-   /*
-* Certain PLLs do not allow VCO rate update when it is on.
-* Keep track of their status to turn on/off after set rate success.
-*/
-   if (unlikely(pll->pll_on))
-   return 0;
-
-   /* Try all enable sequences until one succeeds */
-   for (i = 0; i < pll->en_seq_cnt; i++) {
-   ret = pll->enable_seqs[i](pll);
-   DBG("DSI PLL %s after sequence #%d",
-   ret ? "unlocked" : "locked", i + 1);
-   if (!ret)
-   break;
-   }
-
-   if (ret) {
-   DRM_ERROR("DSI PLL failed to lock\n");
-   return ret;
-   }
-
-   pll->pll_on = true;
-
-   return 0;
-}
-
-static void dsi_pll_disable(struct msm_dsi_pll *pll)
-{
-   i

[Freedreno] [PATCH v4 01/24] clk: mux: provide devm_clk_hw_register_mux()

2021-03-31 Thread Dmitry Baryshkov
Add devm_clk_hw_register_mux() - devres-managed version of
clk_hw_register_mux().

Signed-off-by: Dmitry Baryshkov 
Reviewed-by: Abhinav Kumar 
Acked-by: Stephen Boyd 
---
 drivers/clk/clk-mux.c| 35 +++
 include/linux/clk-provider.h | 13 +
 2 files changed, 48 insertions(+)

diff --git a/drivers/clk/clk-mux.c b/drivers/clk/clk-mux.c
index e54e79714818..20582aae7a35 100644
--- a/drivers/clk/clk-mux.c
+++ b/drivers/clk/clk-mux.c
@@ -8,6 +8,7 @@
  */
 
 #include 
+#include 
 #include 
 #include 
 #include 
@@ -206,6 +207,40 @@ struct clk_hw *__clk_hw_register_mux(struct device *dev, 
struct device_node *np,
 }
 EXPORT_SYMBOL_GPL(__clk_hw_register_mux);
 
+static void devm_clk_hw_release_mux(struct device *dev, void *res)
+{
+   clk_hw_unregister_mux(*(struct clk_hw **)res);
+}
+
+struct clk_hw *__devm_clk_hw_register_mux(struct device *dev, struct 
device_node *np,
+   const char *name, u8 num_parents,
+   const char * const *parent_names,
+   const struct clk_hw **parent_hws,
+   const struct clk_parent_data *parent_data,
+   unsigned long flags, void __iomem *reg, u8 shift, u32 mask,
+   u8 clk_mux_flags, u32 *table, spinlock_t *lock)
+{
+   struct clk_hw **ptr, *hw;
+
+   ptr = devres_alloc(devm_clk_hw_release_mux, sizeof(*ptr), GFP_KERNEL);
+   if (!ptr)
+   return ERR_PTR(-ENOMEM);
+
+   hw = __clk_hw_register_mux(dev, np, name, num_parents, parent_names, 
parent_hws,
+  parent_data, flags, reg, shift, mask,
+  clk_mux_flags, table, lock);
+
+   if (!IS_ERR(hw)) {
+   *ptr = hw;
+   devres_add(dev, ptr);
+   } else {
+   devres_free(ptr);
+   }
+
+   return hw;
+}
+EXPORT_SYMBOL_GPL(__devm_clk_hw_register_mux);
+
 struct clk *clk_register_mux_table(struct device *dev, const char *name,
const char * const *parent_names, u8 num_parents,
unsigned long flags, void __iomem *reg, u8 shift, u32 mask,
diff --git a/include/linux/clk-provider.h b/include/linux/clk-provider.h
index e4316890661a..9cf7ecc62f7c 100644
--- a/include/linux/clk-provider.h
+++ b/include/linux/clk-provider.h
@@ -868,6 +868,13 @@ struct clk_hw *__clk_hw_register_mux(struct device *dev, 
struct device_node *np,
const struct clk_parent_data *parent_data,
unsigned long flags, void __iomem *reg, u8 shift, u32 mask,
u8 clk_mux_flags, u32 *table, spinlock_t *lock);
+struct clk_hw *__devm_clk_hw_register_mux(struct device *dev, struct 
device_node *np,
+   const char *name, u8 num_parents,
+   const char * const *parent_names,
+   const struct clk_hw **parent_hws,
+   const struct clk_parent_data *parent_data,
+   unsigned long flags, void __iomem *reg, u8 shift, u32 mask,
+   u8 clk_mux_flags, u32 *table, spinlock_t *lock);
 struct clk *clk_register_mux_table(struct device *dev, const char *name,
const char * const *parent_names, u8 num_parents,
unsigned long flags, void __iomem *reg, u8 shift, u32 mask,
@@ -902,6 +909,12 @@ struct clk *clk_register_mux_table(struct device *dev, 
const char *name,
__clk_hw_register_mux((dev), NULL, (name), (num_parents), NULL, NULL, \
  (parent_data), (flags), (reg), (shift), \
  BIT((width)) - 1, (clk_mux_flags), NULL, (lock))
+#define devm_clk_hw_register_mux(dev, name, parent_names, num_parents, flags, 
reg, \
+   shift, width, clk_mux_flags, lock)\
+   __devm_clk_hw_register_mux((dev), NULL, (name), (num_parents),\
+ (parent_names), NULL, NULL, (flags), (reg), \
+ (shift), BIT((width)) - 1, (clk_mux_flags), \
+ NULL, (lock))
 
 int clk_mux_val_to_index(struct clk_hw *hw, u32 *table, unsigned int flags,
 unsigned int val);
-- 
2.30.2
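
A possible consumer of the new helper, shown purely as a sketch (driver name,
clock names, and register offset are invented; error handling trimmed): a
probe path registers a two-input pixel-clock mux and relies on devres for
cleanup.

#include <linux/clk-provider.h>
#include <linux/err.h>
#include <linux/kernel.h>

static const char * const example_pclk_parents[] = {
	"example_pll_post_out_div_clk", "example_pll_out_div_clk",
};

static int example_register_pclk_mux(struct device *dev, void __iomem *base,
				     spinlock_t *lock)
{
	struct clk_hw *hw;

	hw = devm_clk_hw_register_mux(dev, "example_pclk_mux",
				      example_pclk_parents,
				      ARRAY_SIZE(example_pclk_parents),
				      CLK_SET_RATE_PARENT,
				      base + 0x48,	/* mux register (invented) */
				      0,		/* shift */
				      1,		/* width: 1 bit, 2 parents */
				      0,		/* clk_mux_flags */
				      lock);

	return PTR_ERR_OR_ZERO(hw);
}

No clk_hw_unregister_mux() is needed in the remove or error paths; the devres
release handler added by this patch takes care of it.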



[Freedreno] [PATCH v4 02/24] clk: divider: add devm_clk_hw_register_divider

2021-03-31 Thread Dmitry Baryshkov
Add devm_clk_hw_register_divider() - devres version of
clk_hw_register_divider().

Signed-off-by: Dmitry Baryshkov 
Reviewed-by: Abhinav Kumar 
Acked-by: Stephen Boyd 
---
 include/linux/clk-provider.h | 17 +
 1 file changed, 17 insertions(+)

diff --git a/include/linux/clk-provider.h b/include/linux/clk-provider.h
index 9cf7ecc62f7c..6273a841f51f 100644
--- a/include/linux/clk-provider.h
+++ b/include/linux/clk-provider.h
@@ -785,6 +785,23 @@ struct clk *clk_register_divider_table(struct device *dev, 
const char *name,
  (parent_data), (flags), (reg), (shift), \
  (width), (clk_divider_flags), (table),  \
  (lock))
+/**
+ * devm_clk_hw_register_divider - register a divider clock with the clock 
framework
+ * @dev: device registering this clock
+ * @name: name of this clock
+ * @parent_name: name of clock's parent
+ * @flags: framework-specific flags
+ * @reg: register address to adjust divider
+ * @shift: number of bits to shift the bitfield
+ * @width: width of the bitfield
+ * @clk_divider_flags: divider-specific flags for this clock
+ * @lock: shared register lock for this clock
+ */
+#define devm_clk_hw_register_divider(dev, name, parent_name, flags, reg, 
shift,\
+   width, clk_divider_flags, lock)   \
+   __devm_clk_hw_register_divider((dev), NULL, (name), (parent_name), 
NULL,   \
+ NULL, (flags), (reg), (shift), (width), \
+ (clk_divider_flags), NULL, (lock))
 /**
  * devm_clk_hw_register_divider_table - register a table based divider clock
  * with the clock framework (devres variant)
-- 
2.30.2
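
And the matching sketch for the divider helper (again with hypothetical names
and register offset):

#include <linux/clk-provider.h>
#include <linux/err.h>

static int example_register_postdiv(struct device *dev, void __iomem *base,
				    spinlock_t *lock)
{
	struct clk_hw *hw;

	hw = devm_clk_hw_register_divider(dev, "example_postdiv",
					  "example_vco_clk",
					  CLK_SET_RATE_PARENT,
					  base + 0x4c,	/* divider register (invented) */
					  0,		/* shift */
					  4,		/* width: 4-bit divider */
					  CLK_DIVIDER_ONE_BASED,
					  lock);

	return PTR_ERR_OR_ZERO(hw);
}
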



[Freedreno] [PATCH v4 00/24] drm/msm/dsi: refactor MSM DSI PHY/PLL drivers

2021-03-31 Thread Dmitry Baryshkov
Restructure the MSM DSI PHY drivers. What started as an attempt to grok the
overcomplicated PHY drivers has led to the idea of merging PHY and PLL code,
reducing abstractions, removing code duplication, dropping dead code,
etc.

The patches were mainly tested on RB5 (sm8250, 7nm) and DB410c (apq8016,
28nm-lp) and lightly tested on RB3 (sdm845, 10nm).

This patchset depends on the patch "clk: fixed: add devm helper for
clk_hw_register_fixed_factor()", which was merged in 5.12-rc1:
https://lore.kernel.org/r/20210211052206.2955988-4-dan...@0x0f.com


Changes since v3:
 - Rename save_state/restore_state functions/callbacks
 - Still mention DSI_1 when determining settings for slave PHYs in 14nm
   and 28nm drivers.
 - Stop including the external dependency merged upstream long ago. It
   is properly mentioned in the patchset description.

Changes since v2:
 - Drop the 'stop setting clock parents manually' patch for now together
   with the dtsi changes. Unlike the rest of the patchset, it provides
   functional changes and might require additional discussion.
   The patchset will be resubmitted later.

Changes since v1:
 - Rebase on top of msm/msm-next
 - Reorder patches to follow logical sequence
 - Add sc7180 clocks assignment
 - Drop sm8250 clocks assignment, as respective file is not updated in
   msm/msm-next

Changes since RFC:
 - Reorder patches to move global clock patches in the beginning and
   dtsi patches where they are required.
 - remove msm_dsi_phy_set_src_pll() and guess src_pll_id using PHY usecase.

