Re: [PATCH v2 4/5] drm/msm/mdss: Handle the reg bus ICC path

2023-04-24 Thread Georgi Djakov

Hi Konrad,

On 18.04.23 15:10, Konrad Dybcio wrote:

Apart from the already handled data bus (MAS_MDP_Pn<->DDR), there's
another path that needs to be handled to ensure MDSS functions properly,
namely the "reg bus", a.k.a the CPU-MDSS interconnect.

Gating that path may have a variety of effects.. from none to otherwise
inexplicable DSI timeouts..

On the MDSS side, we only have to ensure that it's on at what Qualcomm
downstream calls "77 MHz", a.k.a 76.8 Mbps and turn it off at suspend.

To achieve that, make msm_mdss_icc_request_bw() accept a boolean to
indicate whether we want the busses to be on or off, as this function's
only use is to vote for minimum or no bandwidth at all.

Signed-off-by: Konrad Dybcio 
---
  drivers/gpu/drm/msm/msm_mdss.c | 17 +
  1 file changed, 13 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/msm/msm_mdss.c b/drivers/gpu/drm/msm/msm_mdss.c

[..]

-static void msm_mdss_icc_request_bw(struct msm_mdss *msm_mdss, unsigned long bw)
+static void msm_mdss_icc_request_bw(struct msm_mdss *msm_mdss, bool enable)
  {
int i;
  
  	for (i = 0; i < msm_mdss->num_mdp_paths; i++)

-   icc_set_bw(msm_mdss->mdp_path[i], 0, Bps_to_icc(bw));
+   icc_set_bw(msm_mdss->mdp_path[i], 0, enable ? Bps_to_icc(MIN_IB_BW) : 0);
+
+   if (msm_mdss->reg_bus_path)
+   icc_set_bw(msm_mdss->reg_bus_path, 0, enable ? 76800 : 0);


Please use Bps_to_icc, kbps_to_icc or any of the other macros.
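
Something along these lines is what I mean (untested sketch; the 76800
figure is assumed to already be in kB/s, which is what icc_set_bw()
expects):

	if (msm_mdss->reg_bus_path)
		icc_set_bw(msm_mdss->reg_bus_path, 0,
			   enable ? kBps_to_icc(76800) : 0);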

BR,
Georgi


Re: [PATCH v3 08/11] arm64: dts: qcom: sm8350: Use 2 interconnect cells

2022-12-05 Thread Georgi Djakov

Hi Robert,

On 5.12.22 18:37, Robert Foss wrote:

Use two interconnect cells in order to optionally
support a path tag.

Signed-off-by: Robert Foss 
Reviewed-by: Konrad Dybcio 
---
  arch/arm64/boot/dts/qcom/sm8350.dtsi | 28 ++--
  1 file changed, 14 insertions(+), 14 deletions(-)

diff --git a/arch/arm64/boot/dts/qcom/sm8350.dtsi b/arch/arm64/boot/dts/qcom/sm8350.dtsi
index 805d53d91952..434f8e8b12c1 100644
--- a/arch/arm64/boot/dts/qcom/sm8350.dtsi
+++ b/arch/arm64/boot/dts/qcom/sm8350.dtsi
@@ -1543,56 +1543,56 @@ apps_smmu: iommu@1500 {
config_noc: interconnect@150 {
compatible = "qcom,sm8350-config-noc";
reg = <0 0x0150 0 0xa580>;
-   #interconnect-cells = <1>;
+   #interconnect-cells = <2>;
qcom,bcm-voters = <&apps_bcm_voter>;
};
  
  		mc_virt: interconnect@158 {

compatible = "qcom,sm8350-mc-virt";
reg = <0 0x0158 0 0x1000>;
-   #interconnect-cells = <1>;
+   #interconnect-cells = <2>;
qcom,bcm-voters = <&apps_bcm_voter>;
};

[..]

@@ -1620,8 +1620,8 @@ ipa: ipa@1e4 {
clocks = <&rpmhcc RPMH_IPA_CLK>;
clock-names = "core";
  
-			interconnects = <&aggre2_noc MASTER_IPA &mc_virt SLAVE_EBI1>,
-					<&gem_noc MASTER_APPSS_PROC &config_noc SLAVE_IPA_CFG>;
+			interconnects = <&aggre2_noc MASTER_IPA 0 &mc_virt SLAVE_EBI1 0>,
+					<&gem_noc MASTER_APPSS_PROC 0 &config_noc SLAVE_IPA_CFG 0>;
interconnect-names = "memory",
 "config";
  
@@ -1661,7 +1661,7 @@ mpss: remoteproc@408 {

<&rpmhpd SM8350_MSS>;
power-domain-names = "cx", "mss";
  
-			interconnects = <&mc_virt MASTER_LLCC &mc_virt SLAVE_EBI1>;
+			interconnects = <&mc_virt MASTER_LLCC &mc_virt SLAVE_EBI1 0>;


The second cell for the first endpoint is missing, so this should be:
interconnects = <&mc_virt MASTER_LLCC 0 &mc_virt SLAVE_EBI1 0>;

Thanks,
Georgi

  
  			memory-region = <&pil_modem_mem>;
  
@@ -2239,7 +2239,7 @@ cdsp: remoteproc@9890 {

<&rpmhpd SM8350_MXC>;
power-domain-names = "cx", "mxc";
  
-			interconnects = <&compute_noc MASTER_CDSP_PROC &mc_virt SLAVE_EBI1>;
+			interconnects = <&compute_noc MASTER_CDSP_PROC 0 &mc_virt SLAVE_EBI1 0>;
  
  			memory-region = <&pil_cdsp_mem>;
  
@@ -2421,14 +2421,14 @@ usb_2_ssphy: phy@88ebe00 {

dc_noc: interconnect@90c {
compatible = "qcom,sm8350-dc-noc";
reg = <0 0x090c 0 0x4200>;
-   #interconnect-cells = <1>;
+   #interconnect-cells = <2>;
qcom,bcm-voters = <&apps_bcm_voter>;
};
  
  		gem_noc: interconnect@910 {

compatible = "qcom,sm8350-gem-noc";
reg = <0 0x0910 0 0xb4000>;
-   #interconnect-cells = <1>;
+   #interconnect-cells = <2>;
qcom,bcm-voters = <&apps_bcm_voter>;
};
  




Re: [PATCH] dt-bindings: Improve phandle-array schemas

2022-01-19 Thread Georgi Djakov



On 19.01.22 3:50, Rob Herring wrote:

The 'phandle-array' type is a bit ambiguous. It can be either just an
array of phandles or an array of phandles plus args. Many schemas for
phandle-array properties aren't clear in the schema which case applies
though the description usually describes it.

The array of phandles case boils down to needing:

items:
   maxItems: 1

The phandle plus args cases should typically take this form:

items:
  - items:
      - description: A phandle
      - description: 1st arg cell
      - description: 2nd arg cell

With this change, some examples need updating so that the bracketing of
property values matches the schema.


[..]

  .../bindings/interconnect/qcom,rpmh.yaml  |  2 +


Acked-by: Georgi Djakov 


Re: [PATCH 0/3] iommu/drm/msm: Allow non-coherent masters to use system cache

2021-07-28 Thread Georgi Djakov
On Mon, Jan 11, 2021 at 07:45:02PM +0530, Sai Prakash Ranjan wrote:
> commit ecd7274fb4cd ("iommu: Remove unused IOMMU_SYS_CACHE_ONLY flag")
> removed unused IOMMU_SYS_CACHE_ONLY prot flag and along with it went
> the memory type setting required for the non-coherent masters to use
> system cache. Now that system cache support for GPU is added, we will
> need to set the right PTE attribute for GPU buffers to be sys cached.
> Without this, the system cache lines are not allocated for GPU.
> 
> So the patches in this series introduce a new prot flag IOMMU_LLC,
> renames IO_PGTABLE_QUIRK_ARM_OUTER_WBWA to IO_PGTABLE_QUIRK_PTW_LLC
> and makes GPU the user of this protection flag.

Hi Sai,

Thank you for the patchset! Are you planning to refresh it, as it does
not apply anymore?

Thanks,
Georgi

> 
> The series slightly depends on following 2 patches posted earlier and
> is based on msm-next branch:
>  * https://lore.kernel.org/patchwork/patch/1363008/
>  * https://lore.kernel.org/patchwork/patch/1363010/
> 
> Sai Prakash Ranjan (3):
>   iommu/io-pgtable: Rename last-level cache quirk to
> IO_PGTABLE_QUIRK_PTW_LLC
>   iommu/io-pgtable-arm: Add IOMMU_LLC page protection flag
>   drm/msm: Use IOMMU_LLC page protection flag to map gpu buffers
> 
>  drivers/gpu/drm/msm/adreno/a6xx_gpu.c   | 3 +++
>  drivers/gpu/drm/msm/adreno/adreno_gpu.c | 2 +-
>  drivers/gpu/drm/msm/msm_iommu.c | 3 +++
>  drivers/gpu/drm/msm/msm_mmu.h   | 4 
>  drivers/iommu/io-pgtable-arm.c  | 9 ++---
>  include/linux/io-pgtable.h  | 6 +++---
>  include/linux/iommu.h   | 6 ++
>  7 files changed, 26 insertions(+), 7 deletions(-)
> 
> 
> base-commit: 00fd44a1a4700718d5d962432b55c09820f7e709
> -- 
> QUALCOMM INDIA, on behalf of Qualcomm Innovation Center, Inc. is a member
> of Code Aurora Forum, hosted by The Linux Foundation
> 


Re: [PATCH v3 2/3] dt-bindings: Clean-up OPP binding node names in examples

2021-07-20 Thread Georgi Djakov

On 20.07.21 17:41, Rob Herring wrote:

In preparation to convert OPP bindings to DT schema, clean-up a few OPP
binding node names in the binding examples.

Cc: Georgi Djakov 
Cc: Shawn Guo 
Cc: Sascha Hauer 
Cc: Leonard Crestez 
Cc: dri-devel@lists.freedesktop.org
Cc: linux...@vger.kernel.org
Acked-by: Viresh Kumar 
Signed-off-by: Rob Herring 


Acked-by: Georgi Djakov 


---
  Documentation/devicetree/bindings/gpu/arm,mali-bifrost.yaml   | 2 +-
  Documentation/devicetree/bindings/gpu/arm,mali-midgard.yaml   | 2 +-
  .../devicetree/bindings/interconnect/fsl,imx8m-noc.yaml   | 4 ++--
  3 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/Documentation/devicetree/bindings/gpu/arm,mali-bifrost.yaml b/Documentation/devicetree/bindings/gpu/arm,mali-bifrost.yaml
index 0f73f436bea7..4bea51d1e7ea 100644
--- a/Documentation/devicetree/bindings/gpu/arm,mali-bifrost.yaml
+++ b/Documentation/devicetree/bindings/gpu/arm,mali-bifrost.yaml
@@ -136,7 +136,7 @@ examples:
resets = <&reset 0>, <&reset 1>;
  };
  
-gpu_opp_table: opp_table0 {

+gpu_opp_table: opp-table {
compatible = "operating-points-v2";
  
opp-53300 {

diff --git a/Documentation/devicetree/bindings/gpu/arm,mali-midgard.yaml b/Documentation/devicetree/bindings/gpu/arm,mali-midgard.yaml
index 696c17aedbbe..d209f272625d 100644
--- a/Documentation/devicetree/bindings/gpu/arm,mali-midgard.yaml
+++ b/Documentation/devicetree/bindings/gpu/arm,mali-midgard.yaml
@@ -160,7 +160,7 @@ examples:
#cooling-cells = <2>;
  };
  
-gpu_opp_table: opp_table0 {

+gpu_opp_table: opp-table {
compatible = "operating-points-v2";
  
opp-53300 {

diff --git a/Documentation/devicetree/bindings/interconnect/fsl,imx8m-noc.yaml b/Documentation/devicetree/bindings/interconnect/fsl,imx8m-noc.yaml
index a8873739d61a..b8204ed22dd5 100644
--- a/Documentation/devicetree/bindings/interconnect/fsl,imx8m-noc.yaml
+++ b/Documentation/devicetree/bindings/interconnect/fsl,imx8m-noc.yaml
@@ -81,10 +81,10 @@ examples:
  noc_opp_table: opp-table {
  compatible = "operating-points-v2";
  
-opp-133M {

+opp-1 {
  opp-hz = /bits/ 64 <1>;
  };
-opp-800M {
+opp-8 {
  opp-hz = /bits/ 64 <8>;
  };
  };





Re: [PATCH v2 1/7] iommu/io-pgtable: Introduce dynamic io-pgtable fmt registration

2020-12-25 Thread Georgi Djakov

Hi Isaac,

On 22.12.20 2:44, Isaac J. Manjarres wrote:

The io-pgtable code constructs an array of init functions for each
page table format at compile time. This is not ideal, as this
increases the footprint of the io-pgtable code, as well as prevents
io-pgtable formats from being built as kernel modules.

In preparation for modularizing the io-pgtable formats, switch to a
dynamic registration scheme, where each io-pgtable format can register
their init functions with the io-pgtable code at boot or module
insertion time.

Signed-off-by: Isaac J. Manjarres 
---
  drivers/iommu/io-pgtable-arm-v7s.c | 34 +-
  drivers/iommu/io-pgtable-arm.c | 90 ++--
  drivers/iommu/io-pgtable.c | 94 --
  include/linux/io-pgtable.h | 51 +
  4 files changed, 209 insertions(+), 60 deletions(-)

diff --git a/drivers/iommu/io-pgtable-arm-v7s.c b/drivers/iommu/io-pgtable-arm-v7s.c
index 1d92ac9..89aad2f 100644
--- a/drivers/iommu/io-pgtable-arm-v7s.c
+++ b/drivers/iommu/io-pgtable-arm-v7s.c
@@ -28,6 +28,7 @@

[..]

+static int __init arm_lpae_init(void)
+{
+   int ret, i;
+
+   for (i = 0; i < ARRAY_SIZE(io_pgtable_arm_lpae_init_fns); i++) {
+   ret = io_pgtable_ops_register(&io_pgtable_arm_lpae_init_fns[i]);
+   if (ret < 0) {
+   pr_err("Failed to register ARM LPAE fmt: %d\n");


I guess we want to print the format here?
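
I.e. something along these lines (just a sketch; I'm assuming the
init_fns entries in this series carry the format in a 'fmt' field):

	pr_err("Failed to register ARM LPAE fmt %u: %d\n",
	       io_pgtable_arm_lpae_init_fns[i].fmt, ret);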

Thanks,
Georgi


Re: [PATCH v10 01/19] dt-bindings: memory: tegra20: emc: Document opp-supported-hw property

2020-12-01 Thread Georgi Djakov

On 23.11.20 2:27, Dmitry Osipenko wrote:

Document opp-supported-hw property, which is not strictly necessary to
have on Tegra20, but it's very convenient to have because all other SoC
core devices will use hardware versioning, and thus, it's good to maintain
the consistency.


Hi Dmitry,

I believe Krzysztof is waiting for Ack on the binding before merging
this patch (and the rest), but unfortunately it was not sent to the
DT mailing list for review.

Thanks,
Georgi



Signed-off-by: Dmitry Osipenko 
---
  .../bindings/memory-controllers/nvidia,tegra20-emc.txt  | 6 ++
  1 file changed, 6 insertions(+)

diff --git a/Documentation/devicetree/bindings/memory-controllers/nvidia,tegra20-emc.txt b/Documentation/devicetree/bindings/memory-controllers/nvidia,tegra20-emc.txt
index 67ac8d1297da..fe99ce1013bd 100644
--- a/Documentation/devicetree/bindings/memory-controllers/nvidia,tegra20-emc.txt
+++ b/Documentation/devicetree/bindings/memory-controllers/nvidia,tegra20-emc.txt
@@ -16,6 +16,12 @@ Properties:
  - #interconnect-cells : Should be 0.
  - operating-points-v2: See ../bindings/opp/opp.txt for details.
  
+For each opp entry in 'operating-points-v2' table:

+- opp-supported-hw: One bitfield indicating SoC process ID mask
+
+   A bitwise AND is performed against this value and if any bit
+   matches, the OPP gets enabled.
+
  Optional properties:
  - core-supply: Phandle of voltage regulator of the SoC "core" power domain.
  





Re: [PATCH v10 06/19] memory: tegra124: Support interconnect framework

2020-11-23 Thread Georgi Djakov

On 23.11.20 2:27, Dmitry Osipenko wrote:

Now Internal and External memory controllers are memory interconnection
providers. This allows us to use interconnect API for tuning of memory
configuration. EMC driver now supports OPPs and DVFS.

Tested-by: Nicolas Chauvet 
Signed-off-by: Dmitry Osipenko 


Acked-by: Georgi Djakov 

Thanks,
Georgi


---
  drivers/memory/tegra/Kconfig|   1 +
  drivers/memory/tegra/tegra124-emc.c | 320 +++-
  drivers/memory/tegra/tegra124.c |  82 ++-
  3 files changed, 391 insertions(+), 12 deletions(-)



Re: [PATCH v10 03/19] memory: tegra30: Support interconnect framework

2020-11-23 Thread Georgi Djakov

Hi Dmitry,

On 23.11.20 2:27, Dmitry Osipenko wrote:

Now Internal and External memory controllers are memory interconnection
providers. This allows us to use interconnect API for tuning of memory
configuration. EMC driver now supports OPPs and DVFS. MC driver now
supports tuning of memory arbitration latency, which needs to be done
for ISO memory clients, like a Display client for example.

Tested-by: Peter Geis 
Signed-off-by: Dmitry Osipenko 


Acked-by: Georgi Djakov 

Thank you for the continuous work on this patchset!

BR,
Georgi


---
  drivers/memory/tegra/Kconfig   |   1 +
  drivers/memory/tegra/tegra30-emc.c | 344 +++--
  drivers/memory/tegra/tegra30.c | 173 ++-
  3 files changed, 496 insertions(+), 22 deletions(-)



Re: [PATCH v9 01/17] memory: tegra30: Support interconnect framework

2020-11-19 Thread Georgi Djakov

On 18.11.20 0:02, Dmitry Osipenko wrote:

17.11.2020 23:24, Georgi Djakov wrote:

Hi Dmitry,

Thank you for working on this!

On 15.11.20 23:29, Dmitry Osipenko wrote:

Now Internal and External memory controllers are memory interconnection
providers. This allows us to use interconnect API for tuning of memory
configuration. EMC driver now supports OPPs and DVFS. MC driver now
supports tuning of memory arbitration latency, which needs to be done
for ISO memory clients, like a Display client for example.

Tested-by: Peter Geis 
Signed-off-by: Dmitry Osipenko 
---
   drivers/memory/tegra/Kconfig   |   1 +
   drivers/memory/tegra/tegra30-emc.c | 349 +++--
   drivers/memory/tegra/tegra30.c | 173 +-
   3 files changed, 501 insertions(+), 22 deletions(-)


[..]> diff --git a/drivers/memory/tegra/tegra30.c b/drivers/memory/tegra/tegra30.c
index d0314f29608d..ea849003014b 100644
--- a/drivers/memory/tegra/tegra30.c
+++ b/drivers/memory/tegra/tegra30.c

[..]

+
+static int tegra30_mc_icc_set(struct icc_node *src, struct icc_node *dst)
+{
+    struct tegra_mc *mc = icc_provider_to_tegra_mc(src->provider);
+    const struct tegra_mc_client *client = &mc->soc->clients[src->id];
+    u64 peak_bandwidth = icc_units_to_bps(src->peak_bw);
+
+    /*
+ * Skip pre-initialization that is done by icc_node_add(), which sets
+ * bandwidth to maximum for all clients before drivers are loaded.
+ *
+ * This doesn't make sense for us because we don't have drivers for all
+ * clients and it's okay to keep configuration left from bootloader
+ * during boot, at least for today.
+ */
+    if (src == dst)
+    return 0;


Nit: The "proper" way to express this should be to implement the
.get_bw() callback to return zero as initial average/peak bandwidth.
I'm wondering if this will work here?

The rest looks good to me!


Hello Georgi,

Returning zeros doesn't allow us to skip the initialization that is done
by provider->set(node, node) in icc_node_add(). It will reconfigure
memory latency in accordance to a zero memory bandwidth, which is wrong
to do.

It actually should be more preferred to preset bandwidth to a maximum
before all drivers are synced, but this should be done only once we will
wire up all drivers to use ICC framework. For now it's safer to keep the
default hardware configuration untouched.


Ok, thanks for clarifying! Is there a way to read this hardware
configuration and convert it to initial bandwidth? That's the
idea of the get_bw() callback actually. I am just curious and
trying to get a better understanding of how this works and if it
would be useful for Tegra.

Thanks,
Georgi


Re: [PATCH v9 01/17] memory: tegra30: Support interconnect framework

2020-11-18 Thread Georgi Djakov

Hi Dmitry,

Thank you for working on this!

On 15.11.20 23:29, Dmitry Osipenko wrote:

Now Internal and External memory controllers are memory interconnection
providers. This allows us to use interconnect API for tuning of memory
configuration. EMC driver now supports OPPs and DVFS. MC driver now
supports tuning of memory arbitration latency, which needs to be done
for ISO memory clients, like a Display client for example.

Tested-by: Peter Geis 
Signed-off-by: Dmitry Osipenko 
---
  drivers/memory/tegra/Kconfig   |   1 +
  drivers/memory/tegra/tegra30-emc.c | 349 +++--
  drivers/memory/tegra/tegra30.c | 173 +-
  3 files changed, 501 insertions(+), 22 deletions(-)

[..]> diff --git a/drivers/memory/tegra/tegra30.c b/drivers/memory/tegra/tegra30.c
index d0314f29608d..ea849003014b 100644
--- a/drivers/memory/tegra/tegra30.c
+++ b/drivers/memory/tegra/tegra30.c

[..]

+
+static int tegra30_mc_icc_set(struct icc_node *src, struct icc_node *dst)
+{
+   struct tegra_mc *mc = icc_provider_to_tegra_mc(src->provider);
+   const struct tegra_mc_client *client = &mc->soc->clients[src->id];
+   u64 peak_bandwidth = icc_units_to_bps(src->peak_bw);
+
+   /*
+* Skip pre-initialization that is done by icc_node_add(), which sets
+* bandwidth to maximum for all clients before drivers are loaded.
+*
+* This doesn't make sense for us because we don't have drivers for all
+* clients and it's okay to keep configuration left from bootloader
+* during boot, at least for today.
+*/
+   if (src == dst)
+   return 0;


Nit: The "proper" way to express this should be to implement the
.get_bw() callback to return zero as initial average/peak bandwidth.
I'm wondering if this will work here?
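
A minimal (untested) sketch of what I have in mind, with made-up names:

	static int tegra30_mc_icc_get_init_bw(struct icc_node *node,
					      u32 *avg, u32 *peak)
	{
		*avg = 0;
		*peak = 0;

		return 0;
	}

and then point the provider's .get_bw at it wherever the icc_provider
is set up.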

The rest looks good to me!

Thanks,
Georgi


Re: [PATCH v7 2/6] interconnect: Add generic interconnect driver for Exynos SoCs

2020-11-04 Thread Georgi Djakov
Hi Sylwester,

Thank you for refreshing the patchset!

On 10/30/20 14:51, Sylwester Nawrocki wrote:
> This patch adds a generic interconnect driver for Exynos SoCs in order
> to provide interconnect functionality for each "samsung,exynos-bus"
> compatible device.
> 
> The SoC topology is a graph (or more specifically, a tree) and its
> edges are specified using the 'samsung,interconnect-parent' in the
> DT. Due to unspecified relative probing order, -EPROBE_DEFER may be
> propagated to ensure that the parent is probed before its children.
> 
> Each bus is now an interconnect provider and an interconnect node as
> well (cf. Documentation/interconnect/interconnect.rst), i.e. every bus
> registers itself as a node. Node IDs are not hardcoded but rather
> assigned dynamically at runtime. This approach allows for using this
> driver with various Exynos SoCs.
> 
> Frequencies requested via the interconnect API for a given node are
> propagated to devfreq using dev_pm_qos_update_request(). Please note
> that it is not an error when CONFIG_INTERCONNECT is 'n', in which
> case all interconnect API functions are no-op.
> 
> The bus-width DT property is used to determine the interconnect data
> width and translate requested bandwidth to clock frequency for each
> bus.
> 
> Signed-off-by: Artur Świgoń 
> Signed-off-by: Sylwester Nawrocki 
> ---
> Changes for v7:
>  - adjusted to the DT property changes: "interconnects" instead
>of "samsung,interconnect-parent", "samsung,data-clk-ratio"
>instead of "bus-width",
>  - adaptation to of_icc_get_from_provider() function changes
>in v5.10-rc1.
> 
> Changes for v6:
>  - corrected of_node dereferencing in exynos_icc_get_parent()
>function,
>  - corrected initialization of icc_node->name so as to avoid
>direct of_node->name dereferencing,
>  - added parsing of bus-width DT property.
> 
> Changes for v5:
>  - adjust to renamed exynos,interconnect-parent-node property,
>  - use automatically generated platform device id as the interconect
>node id instead of a now unavailable devfreq->id field,
>  - add icc_ prefix to some variables to make the code more self-commenting,
>  - use icc_nodes_remove() instead of icc_node_del() + icc_node_destroy(),
>  - adjust to exynos,interconnect-parent-node property rename to
>samsung,interconnect-parent,
>  - converted to a separate platform driver in drivers/interconnect.
> 
> ---
>  drivers/interconnect/Kconfig |   1 +
>  drivers/interconnect/Makefile|   1 +
>  drivers/interconnect/exynos/Kconfig  |   6 ++
>  drivers/interconnect/exynos/Makefile |   4 +
>  drivers/interconnect/exynos/exynos.c | 198 +++
>  5 files changed, 210 insertions(+)
>  create mode 100644 drivers/interconnect/exynos/Kconfig
>  create mode 100644 drivers/interconnect/exynos/Makefile
>  create mode 100644 drivers/interconnect/exynos/exynos.c
> 
[..]
> +
> +static struct platform_driver exynos_generic_icc_driver = {
> + .driver = {
> + .name = "exynos-generic-icc",

I think that you will need this:
.sync_state = icc_sync_state,

Thanks,
Georgi

> + },
> + .probe = exynos_generic_icc_probe,
> + .remove = exynos_generic_icc_remove,
> +};
> +module_platform_driver(exynos_generic_icc_driver);
> +
> +MODULE_DESCRIPTION("Exynos generic interconnect driver");
> +MODULE_AUTHOR("Artur Świgoń ");
> +MODULE_AUTHOR("Sylwester Nawrocki ");
> +MODULE_LICENSE("GPL v2");
> +MODULE_ALIAS("platform:exynos-generic-icc");
> 


Re: [PATCH v7 0/6] Exynos: Simple QoS for exynos-bus using interconnect

2020-11-04 Thread Georgi Djakov
Hi Chanwoo and Sylwester,

On 11/3/20 09:54, Chanwoo Choi wrote:
> Hi Sylwester,
> 
> When I tested this patchset on Odroid-U3,
> After setting 0 bps by interconnect[1][2],
> the frequency of devfreq devs sustain the high frequency
> according to the pm qos request.
> 
> So, I try to find the cause of this situation.
> In result, it seems that interconnect exynos driver
> updates the pm qos request to devfreq device
> during the kernel booting. Do you know why the exynos
> interconnect driver request the pm qos during probe
> without the mixer request?

That's probably because of the sync_state support, which was introduced
recently. The icc_sync_state callback needs to be added to the driver
(I just left a comment on that patch), and then check again if it works.

The idea of the sync_state is that there could be multiple users of a
path and we must wait for all consumers to tell their bandwidth needs.
Otherwise the first consumer may lower the bandwidth or disable a path
needed for another consumer (driver), which has not probed yet. So we
maintain a floor bandwidth until everyone has probed. By default the floor
bandwidth is INT_MAX, but can be overridden by implementing the get_bw()
callback.
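
For the exynos driver, wiring it up would look roughly like this
(untested sketch, reusing the names from the patch):

	static struct platform_driver exynos_generic_icc_driver = {
		.driver = {
			.name = "exynos-generic-icc",
			.sync_state = icc_sync_state,
		},
		.probe = exynos_generic_icc_probe,
		.remove = exynos_generic_icc_remove,
	};

and, if a floor other than the INT_MAX default is wanted, a get_bw()
callback on the provider that reports the boot-time (or zero) bandwidth.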

Thanks,
Georgi

> 
> PS. The passive governor has a bug related to PM_QOS interface.
> So, I posted the patch[4].
> 
> 
> [1] interconnect_graph
> root@localhost:~# cat /sys/kernel/debug/interconnect/interconnect_graph 
> digraph {
> rankdir = LR
> node [shape = record]
> subgraph cluster_1 {
> label = "soc:bus_dmc"
> "2:bus_dmc" [label="2:bus_dmc
> |avg_bw=0kBps
> |peak_bw=0kBps"]
> }
> subgraph cluster_2 {
> label = "soc:bus_leftbus"
> "3:bus_leftbus" [label="3:bus_leftbus
> |avg_bw=0kBps
> |peak_bw=0kBps"]
> }
> subgraph cluster_3 {
> label = "soc:bus_display"
> "4:bus_display" [label="4:bus_display
> |avg_bw=0kBps
> |peak_bw=0kBps"]
> }
> "3:bus_leftbus" -> "2:bus_dmc"
> "4:bus_display" -> "3:bus_leftbus"
> 
> 
> [2] interconnect_summary
> root@localhost:~# cat /sys/kernel/debug/interconnect/interconnect_summary 
>  node  tag  avg peak
> 
> bus_dmc   00
>   12c1.mixer 000
> bus_leftbus   00
>   12c1.mixer 000
> bus_display   00
>   12c1.mixer 000
> 
> 
> [3] devfreq_summary
> root@localhost:~# cat /sys/kernel/debug/devfreq/devfreq_summary 
> devparent_dev governor
> timer  polling_ms  cur_freq_Hz  min_freq_Hz  max_freq_Hz
> -- -- --- 
> -- --   
> soc:bus_dmcnull   simple_ondemand 
> deferrable 50444
> soc:bus_acpsoc:bus_dmcpassive 
> null026700126700
> soc:bus_c2csoc:bus_dmcpassive 
> null0414
> soc:bus_leftbusnull   simple_ondemand 
> deferrable 50222
> soc:bus_rightbus   soc:bus_leftbuspassive 
> null0212
> soc:bus_displaysoc:bus_leftbuspassive 
> null0222
> soc:bus_fsys   soc:bus_leftbuspassive 
> null013400113400
> soc:bus_peri   soc:bus_leftbuspassive 
> null01 50001
> soc:bus_mfcsoc:bus_leftbuspassive 
> null0212
> 
> 
> [4] PM / devfreq: passive: Update frequency when start governor
> https://patchwork.kernel.org/project/linux-pm/patch/20201103070646.18687-1-cw00.c...@samsung.com/
> 
> 


[PATCH] drm/msm: Remove depends on interconnect

2020-09-16 Thread Georgi Djakov
The dependency on interconnect in the Kconfig was introduced to avoid
the case of interconnect=m and driver=y, but the interconnect framework
has been converted from tristate to bool now. Remove the dependency as
the framework can't be a module anymore.

Signed-off-by: Georgi Djakov 
---
 drivers/gpu/drm/msm/Kconfig | 1 -
 1 file changed, 1 deletion(-)

diff --git a/drivers/gpu/drm/msm/Kconfig b/drivers/gpu/drm/msm/Kconfig
index 5c55cd0ce9f9..3348969460ab 100644
--- a/drivers/gpu/drm/msm/Kconfig
+++ b/drivers/gpu/drm/msm/Kconfig
@@ -6,7 +6,6 @@ config DRM_MSM
depends on ARCH_QCOM || SOC_IMX5 || (ARM && COMPILE_TEST)
depends on OF && COMMON_CLK
depends on MMU
-   depends on INTERCONNECT || !INTERCONNECT
depends on QCOM_OCMEM || QCOM_OCMEM=n
select QCOM_MDT_LOADER if ARCH_QCOM
select REGULATOR


Re: [PATCH RFC v6 1/6] dt-bindings: exynos-bus: Add documentation for interconnect properties

2020-09-16 Thread Georgi Djakov
Hi Sylwester,

On 9/9/20 17:47, Sylwester Nawrocki wrote:
> Hi Georgi,
> 
> On 09.09.2020 11:07, Georgi Djakov wrote:
>> On 8/28/20 17:49, Sylwester Nawrocki wrote:
>>> On 30.07.2020 14:28, Sylwester Nawrocki wrote:
>>>> On 09.07.2020 23:04, Rob Herring wrote:
>>>>> On Thu, Jul 02, 2020 at 06:37:19PM +0200, Sylwester Nawrocki wrote:
>>>>>> Add documentation for new optional properties in the exynos bus nodes:
>>>>>> samsung,interconnect-parent, #interconnect-cells, bus-width.
>>>>>> These properties allow to specify the SoC interconnect structure which
>>>>>> then allows the interconnect consumer devices to request specific
>>>>>> bandwidth requirements.
>>>>>>
>>>>>> Signed-off-by: Artur Świgoń 
>>>>>> Signed-off-by: Sylwester Nawrocki 
>>>
>>>>>> --- a/Documentation/devicetree/bindings/devfreq/exynos-bus.txt
>>>>>> +++ b/Documentation/devicetree/bindings/devfreq/exynos-bus.txt
> 
>>>>>> +Optional properties for interconnect functionality (QoS frequency constraints):
>>>>>> +- samsung,interconnect-parent: phandle to the parent interconnect node; for
>>>>>> +  passive devices should point to same node as the exynos,parent-bus property.
>>>
>>>>> Adding vendor specific properties for a common binding defeats the point.
>>>
>>> Actually we could do without any new property if we used existing interconnect
>>> consumers binding to specify linking between the provider nodes. I think those
>>> exynos-bus nodes could well be considered both the interconnect providers
>>> and consumers. The example would then be something along the lines
>>> (yes, I know the bus node naming needs to be fixed):
>>>
>>> soc {
>>> bus_dmc: bus_dmc {
>>> compatible = "samsung,exynos-bus";
>>> /* ... */
>>> samsung,data-clock-ratio = <4>;
>>> #interconnect-cells = <0>;
>>> };
>>>
>>> bus_leftbus: bus_leftbus {
>>> compatible = "samsung,exynos-bus";
>>> /* ... */
>>> interconnects = <&bus_leftbus &bus_dmc>;
>>> #interconnect-cells = <0>;
>>> };
>>>
>>> bus_display: bus_display {
>>> compatible = "samsung,exynos-bus";
>>> /* ... */
>>> interconnects = <&bus_display &bus_leftbus>;
>>
>> Hmm, bus_display being a consumer of itself is a bit odd? Did you mean:
>>  interconnects = <&bus_dmc &bus_leftbus>;
> 
> Might be, but we would need to swap the phandles so 
> order is maintained, i.e. interconnects = <&bus_leftbus &bus_dmc>;

Ok, i see. Thanks for clarifying! Currently the "interconnects" property is
defined as a pair of initiator and target nodes. You can keep it also as
interconnects = <&bus_display &bus_dmc>, but you will need to figure out
during probe that there is another node in the middle and defer. I assume
that if a provider is also an interconnect consumer, we will link it to
whatever nodes are specified in the "interconnects" property?

> My intention here was to describe the 'bus_display -> bus_leftbus' part 
> of data path 'bus_display -> bus_leftbus -> bus_dmc', bus_display is
> really a consumer of 'bus_leftbus -> bus_dmc' path.
>
> I'm not sure if it is allowed to specify only single phandle (and 
> interconnect provider specifier) in the interconnect property, that would
> be needed for the bus_leftbus node to define bus_dmc as the interconnect 
> destination port. There seems to be such a use case in arch/arm64/boot/
> dts/allwinner/sun50i-a64.dtsi.

In the general case you have to specify pairs. The "dma-mem" is a reserved
name for devices that perform DMA through another bus with separate address
translation rules.

>>> #interconnect-cells = <0>;
>>> };
>>>
>>>
>>> &mixer {
>>> compatible = "samsung,exynos4212-mixer";
>>> 

Re: [PATCH v5 27/36] memory: tegra-mc: Register as interconnect provider

2020-09-10 Thread Georgi Djakov
On 8/14/20 03:06, Dmitry Osipenko wrote:
> Now memory controller is a memory interconnection provider. This allows us
> to use interconnect API in order to change memory configuration.
> 
> Signed-off-by: Dmitry Osipenko 

Thanks Dmitry! Looks good to me.

Acked-by: Georgi Djakov 


Re: [PATCH v5 33/36] memory: tegra30-emc: Register as interconnect provider

2020-09-10 Thread Georgi Djakov
On 8/14/20 03:06, Dmitry Osipenko wrote:
> Now external memory controller is a memory interconnection provider.
> This allows us to use interconnect API to change memory configuration.
> 
> Signed-off-by: Dmitry Osipenko 

Acked-by: Georgi Djakov 


Re: [PATCH v5 30/36] memory: tegra20-emc: Register as interconnect provider

2020-09-10 Thread Georgi Djakov
On 8/14/20 03:06, Dmitry Osipenko wrote:
> Now memory controller is a memory interconnection provider. This allows us
> to use interconnect API in order to change memory configuration.
> 
> Signed-off-by: Dmitry Osipenko 

Acked-by: Georgi Djakov 


Re: [PATCH RFC v6 1/6] dt-bindings: exynos-bus: Add documentation for interconnect properties

2020-09-10 Thread Georgi Djakov
Hi Sylwester,

On 8/28/20 17:49, Sylwester Nawrocki wrote:
> On 30.07.2020 14:28, Sylwester Nawrocki wrote:
>> On 09.07.2020 23:04, Rob Herring wrote:
>>> On Thu, Jul 02, 2020 at 06:37:19PM +0200, Sylwester Nawrocki wrote:
 Add documentation for new optional properties in the exynos bus nodes:
 samsung,interconnect-parent, #interconnect-cells, bus-width.
 These properties allow to specify the SoC interconnect structure which
 then allows the interconnect consumer devices to request specific
 bandwidth requirements.

 Signed-off-by: Artur Świgoń 
 Signed-off-by: Sylwester Nawrocki 
> 
 --- a/Documentation/devicetree/bindings/devfreq/exynos-bus.txt
 +++ b/Documentation/devicetree/bindings/devfreq/exynos-bus.txt
 @@ -51,6 +51,13 @@ Optional properties only for parent bus device:
  - exynos,saturation-ratio: the percentage value which is used to calibrate
the performance count against total cycle count.
  
 +Optional properties for interconnect functionality (QoS frequency constraints):
 +- samsung,interconnect-parent: phandle to the parent interconnect node; for
 +  passive devices should point to same node as the exynos,parent-bus property.
> 
>>> Adding vendor specific properties for a common binding defeats the point.
> 
> Actually we could do without any new property if we used existing interconnect
> consumers binding to specify linking between the provider nodes. I think those
> exynos-bus nodes could well be considered both the interconnect providers 
> and consumers. The example would then be something along the lines 
> (yes, I know the bus node naming needs to be fixed):
> 
>   soc {
>   bus_dmc: bus_dmc {
>   compatible = "samsung,exynos-bus";
>   /* ... */
>   samsung,data-clock-ratio = <4>;
>   #interconnect-cells = <0>;
>   };
> 
>   bus_leftbus: bus_leftbus {
>   compatible = "samsung,exynos-bus";
>   /* ... */
>   interconnects = <&bus_leftbus &bus_dmc>;
>   #interconnect-cells = <0>;
>   };
> 
>   bus_display: bus_display {
>   compatible = "samsung,exynos-bus";
>   /* ... */
>   interconnects = <&bus_display &bus_leftbus>;

Hmm, bus_display being a consumer of itself is a bit odd? Did you mean:
interconnects = <&bus_dmc &bus_leftbus>;

>   #interconnect-cells = <0>;
>   };
> 
> 
>   &mixer {
>   compatible = "samsung,exynos4212-mixer";
>   interconnects = <&bus_display &bus_dmc>;
>   /* ... */
>   };
>   };
> 
> What do you think, Georgi, Rob?

I can't understand the above example with bus_display being its own consumer.
This seems strange to me. Could you please clarify it?

Otherwise the interconnect consumer DT bindings are already well established
and I don't see anything preventing a node from being both consumer and provider.
So this should be okay in general.

Thanks,
Georgi


Re: [RFC PATCH] interconnect: qcom: add functions to query addr/cmds for a path

2020-07-14 Thread Georgi Djakov
On 7/1/20 07:25, Jonathan Marek wrote:
> The a6xx GMU can vote for ddr and cnoc bandwidth, but it needs to be able
> to query the interconnect driver for bcm addresses and commands.

It's not very clear to me how the GMU firmware would be dealing with this? Does
anyone have an idea whether the GMU makes any bandwidth decisions? Or is it just
a static configuration and it just enables/disables a TCS?

I think that we can query the address from the cmd-db, but we have to know the
bcm names and the path. All the BCM/TCS information looks to be very low-level
and implementation specific, so exposing it through an API is not very good,
but hard-coding all this information is not good either.
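
For reference, the cmd-db lookup itself is simple; the problem is where
the names come from. A rough sketch ("MC0" is just an example of a DDR
BCM name and would still have to be hard-coded or discovered somehow):

	#include <soc/qcom/cmd-db.h>

	u32 addr = cmd_db_read_addr("MC0");
	if (!addr)
		return -ENODEV;	/* BCM not described in the command DB */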

Thanks,
Georgi

> 
> I'm not sure what is the best way to go about implementing this, this is
> what I came up with.
> 
> I included a quick example of how this can be used by the a6xx driver to
> fill out the GMU bw_table (two ddr bandwidth levels in this example, note
> this would be using the frequency table in dts and not hardcoded values).
> 
> Signed-off-by: Jonathan Marek 
> ---
>  drivers/gpu/drm/msm/adreno/a6xx_hfi.c | 20 ---
>  drivers/interconnect/qcom/icc-rpmh.c  | 50 +++
>  include/soc/qcom/icc.h| 11 ++
>  3 files changed, 68 insertions(+), 13 deletions(-)


Re: [RFC PATCH v5 2/6] interconnect: Add generic interconnect driver for Exynos SoCs

2020-07-03 Thread Georgi Djakov
Hi Sylwester,

On 7/2/20 15:01, Sylwester Nawrocki wrote:
> Hi Georgi,
> 
> On 01.07.2020 14:50, Georgi Djakov wrote:
>> Thanks for the patch and apologies for the delayed reply.
> 
> Thanks, no problem. It's actually just in time as I put that patchset
> aside for a while and was just about to post an update.
>  
>> On 5/29/20 19:31, Sylwester Nawrocki wrote:
>>> This patch adds a generic interconnect driver for Exynos SoCs in order
>>> to provide interconnect functionality for each "samsung,exynos-bus"
>>> compatible device.
>>>
>>> The SoC topology is a graph (or more specifically, a tree) and its
>>> edges are specified using the 'samsung,interconnect-parent' in the
>>> DT. Due to unspecified relative probing order, -EPROBE_DEFER may be
>>> propagated to ensure that the parent is probed before its children.
>>>
>>> Each bus is now an interconnect provider and an interconnect node as
>>> well (cf. Documentation/interconnect/interconnect.rst), i.e. every bus
>>> registers itself as a node. Node IDs are not hardcoded but rather
>>> assigned dynamically at runtime. This approach allows for using this
>>> driver with various Exynos SoCs.
>>>
>>> Frequencies requested via the interconnect API for a given node are
>>> propagated to devfreq using dev_pm_qos_update_request(). Please note
>>> that it is not an error when CONFIG_INTERCONNECT is 'n', in which
>>> case all interconnect API functions are no-op.
>>>
>>> Signed-off-by: Artur Świgoń 
>>> Signed-off-by: Sylwester Nawrocki 
> 
>>> +static struct icc_node *exynos_icc_get_parent(struct device_node *np)
>>> +{
>>> +   struct of_phandle_args args;
>>> +   int num, ret;
>>> +
>>> +   num = of_count_phandle_with_args(np, "samsung,interconnect-parent",
>>> +   "#interconnect-cells");
>>> +   if (num != 1)
>>> +   return NULL; /* parent nodes are optional */
>>> +
>>> +   ret = of_parse_phandle_with_args(np, "samsung,interconnect-parent",
>>> +   "#interconnect-cells", 0, &args);
>>> +   if (ret < 0)
>>> +   return ERR_PTR(ret);
>>> +
>>> +   of_node_put(args.np);
>>> +
>>> +   return of_icc_get_from_provider(&args);
>>> +}
>>> +
>>> +
>>
>> Nit: multiple blank lines
> 
> Fixed.
> 
>> [..]
>>> +static struct icc_node *exynos_generic_icc_xlate(struct of_phandle_args *spec,
>>> +						  void *data)
>>> +{
>>> +   struct exynos_icc_priv *priv = data;
>>> +
>>> +   if (spec->np != priv->dev->parent->of_node)
>>> +   return ERR_PTR(-EINVAL);
>>> +
>>> +   return priv->node;
>>> +}
>>> +
>>> +static int exynos_generic_icc_remove(struct platform_device *pdev)
>>> +{
>>> +   struct exynos_icc_priv *priv = platform_get_drvdata(pdev);
>>> +   struct icc_node *parent_node, *node = priv->node;
>>> +
>>> +   parent_node = exynos_icc_get_parent(priv->dev->parent->of_node);
>>> +   if (parent_node && !IS_ERR(parent_node))
>>
>> Nit: !IS_ERR_OR_NULL?
> 
> It was left on purpose that way but I changed it now to IS_ERR_OR_NULL.

Well, I have no strong opinion on that, it's up to you.

>>> +   icc_link_destroy(node, parent_node);
>>> +
>>> +   icc_nodes_remove(&priv->provider);
>>> +   icc_provider_del(&priv->provider);
>>> +
>>> +   return 0;
>>> +}
>>> +
>>> +static int exynos_generic_icc_probe(struct platform_device *pdev)
>>> +{
>>> +   struct device *bus_dev = pdev->dev.parent;
>>> +   struct exynos_icc_priv *priv;
>>> +   struct icc_provider *provider;
>>> +   struct icc_node *icc_node, *icc_parent_node;
>>> +   int ret;
>>> +
>>> +   priv = devm_kzalloc(&pdev->dev, sizeof(*priv), GFP_KERNEL);
>>> +   if (!priv)
>>> +   return -ENOMEM;
>>> +
>>> +   priv->dev = &pdev->dev;
>>> +   platform_set_drvdata(pdev, priv);
>>> +
>>> +   provider = &priv->provider;
>>> +
>>> +   provider->set = exynos_generic_icc_set;
>>> +   provider->aggregate = icc_std_aggregate;
>>> +   provider->xlate 

Re: [PATCH v4 28/37] memory: tegra: Register as interconnect provider

2020-07-03 Thread Georgi Djakov
Hi Dmitry,

On 7/2/20 02:36, Dmitry Osipenko wrote:
> 01.07.2020 20:12, Georgi Djakov wrote:
>> Hi Dmitry,
>>
>> Thank you for updating the patches!
> 
> Hello, Georgi!
> 
> Thank you for the review!
> 
>> On 6/9/20 16:13, Dmitry Osipenko wrote:
>>> Now memory controller is a memory interconnection provider. This allows us
>>> to use interconnect API in order to change memory configuration.
>>>
>>> Signed-off-by: Dmitry Osipenko 
>>> ---
>>>  drivers/memory/tegra/Kconfig |   1 +
>>>  drivers/memory/tegra/mc.c| 114 +++
>>>  drivers/memory/tegra/mc.h|   8 +++
>>>  include/soc/tegra/mc.h   |   3 +
>>>  4 files changed, 126 insertions(+)
>>>
>>> diff --git a/drivers/memory/tegra/Kconfig b/drivers/memory/tegra/Kconfig
>>> index 5bf75b316a2f..7055fdef2c32 100644
>>> --- a/drivers/memory/tegra/Kconfig
>>> +++ b/drivers/memory/tegra/Kconfig
>>> @@ -3,6 +3,7 @@ config TEGRA_MC
>>> bool "NVIDIA Tegra Memory Controller support"
>>> default y
>>> depends on ARCH_TEGRA
>>> +   select INTERCONNECT
>>> help
>>>   This driver supports the Memory Controller (MC) hardware found on
>>>   NVIDIA Tegra SoCs.
>>> diff --git a/drivers/memory/tegra/mc.c b/drivers/memory/tegra/mc.c
>>> index 772aa021b5f6..7ef7ac9e103e 100644
>>> --- a/drivers/memory/tegra/mc.c
>>> +++ b/drivers/memory/tegra/mc.c
>>> @@ -594,6 +594,118 @@ static __maybe_unused irqreturn_t tegra20_mc_irq(int irq, void *data)
>>> return IRQ_HANDLED;
>>>  }
>>>  
>>> +static int tegra_mc_icc_set(struct icc_node *src, struct icc_node *dst)
>>> +{
>>> +   return 0;
>>> +}
>>> +
>>> +static int tegra_mc_icc_aggregate(struct icc_node *node,
>>> + u32 tag, u32 avg_bw, u32 peak_bw,
>>> + u32 *agg_avg, u32 *agg_peak)
>>> +{
>>> +   *agg_avg = min((u64)avg_bw + (*agg_avg), (u64)U32_MAX);
>>> +   *agg_peak = max(*agg_peak, peak_bw);
>>> +
>>> +   return 0;
>>> +}
>>> +
>>> +/*
>>> + * Memory Controller (MC) has few Memory Clients that are issuing memory
>>> + * bandwidth allocation requests to the MC interconnect provider. The MC
>>> + * provider aggregates the requests and then sends the aggregated request
>>> + * up to the External Memory Controller (EMC) interconnect provider which
>>> + * re-configures hardware interface to External Memory (EMEM) in accordance
>>> + * to the required bandwidth. Each MC interconnect node represents an
>>> + * individual Memory Client.
>>> + *
>>> + * Memory interconnect topology:
>>> + *
>>> + *               +----+
>>> + * +--------+    |    |
>>> + * | TEXSRD +--->+    |
>>> + * +--------+    |    |
>>> + *               |    |    +-----+    +------+
>>> + *           ... | MC +--->+ EMC +--->+ EMEM |
>>> + *               |    |    +-----+    +------+
>>> + * +--------+    |    |
>>> + * | DISP.. +--->+    |
>>> + * +--------+    |    |
>>> + *               +----+
>>> + */
>>> +static int tegra_mc_interconnect_setup(struct tegra_mc *mc)
>>> +{
>>> +   struct icc_onecell_data *data;
>>> +   struct icc_node *node;
>>> +   unsigned int num_nodes;
>>> +   unsigned int i;
>>> +   int err;
>>> +
>>> +   /* older device-trees don't have interconnect properties */
>>> +   if (!of_find_property(mc->dev->of_node, "#interconnect-cells", NULL))
>>> +   return 0;
>>> +
>>> +   num_nodes = mc->soc->num_clients;
>>> +
>>> +   data = devm_kzalloc(mc->dev, struct_size(data, nodes, num_nodes),
>>> +   GFP_KERNEL);
>>> +   if (!data)
>>> +   return -ENOMEM;
>>> +
>>> +   mc->provider.dev = mc->dev;
>>> +   mc->provider.set = tegra_mc_icc_set;
>>
>> Hmm, maybe the core should not require a set() implementation and we can
>> just make it optional instead. Then the dummy function would not be needed.
> 
> Eventually this dummy function might become populated with memory
> latency allowance programming. I could add a comment into that function
> in the next version, saying that it's to-be-done for now.

Ah ok! Sounds good, thanks for clarifying!

>>> +   mc->provider.data = data;
>>> +   mc->provider.xlate = of_icc_xlate_onecell;
>>> +   mc->provider.aggregate = tegra_mc_icc_aggregate;
>>> +
>>> +   err = icc_provider_add(&mc->provider);
>>> +   if (err)
>>> +   goto err_msg;
>>
>> Nit: I am planning to re-organize some of the existing drivers to call
>> icc_provider_add() after the topology is populated. Could you please move
>> this after the nodes are created and linked.
> 
> Are you planning to remove the provider's list-head initialization from
> the icc_provider_add() [1] and move it to the individual provider
> drivers, correct?

Yes, that would be the first step, but I need to post some patches first,
so let's keep it as-is for now. Sorry for the confusion.

Thanks,
Georgi


Re: [PATCH v4 27/37] interconnect: Relax requirement in of_icc_get_from_provider()

2020-07-02 Thread Georgi Djakov
Hi Dmitry,

On 6/9/20 16:13, Dmitry Osipenko wrote:
> From: Artur Świgoń 
> 
> This patch relaxes the condition in of_icc_get_from_provider() so that it
> is no longer required to set #interconnect-cells = <1> in the DT. In case
> of the devfreq driver for exynos-bus, #interconnect-cells is always zero.
> 
> Signed-off-by: Artur Świgoń 
> [dig...@gmail.com: added cells_num checking for of_icc_xlate_onecell()]
> Signed-off-by: Dmitry Osipenko 

I have already applied the original patch by Artur, so please make the cells_num
check a separate patch.

Thanks,
Georgi

> ---
>  drivers/interconnect/core.c | 11 ++-
>  1 file changed, 10 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/interconnect/core.c b/drivers/interconnect/core.c
> index e5f998744501..cb143421ca67 100644
> --- a/drivers/interconnect/core.c
> +++ b/drivers/interconnect/core.c
> @@ -339,7 +339,7 @@ static struct icc_node *of_icc_get_from_provider(struct of_phandle_args *spec)
>   struct icc_node *node = ERR_PTR(-EPROBE_DEFER);
>   struct icc_provider *provider;
>  
> - if (!spec || spec->args_count != 1)
> + if (!spec)
>   return ERR_PTR(-EINVAL);
>  
>   mutex_lock(&icc_lock);
> @@ -967,6 +967,15 @@ EXPORT_SYMBOL_GPL(icc_nodes_remove);
>   */
>  int icc_provider_add(struct icc_provider *provider)
>  {
> + struct device_node *np = provider->dev->of_node;
> + u32 cells_num;
> + int err;
> +
> + err = of_property_read_u32(np, "#interconnect-cells", &cells_num);
> + if (WARN_ON(err))
> + return err;
> + if (WARN_ON(provider->xlate == of_icc_xlate_onecell && cells_num != 1))
> + return -EINVAL;
>   if (WARN_ON(!provider->set))
>   return -EINVAL;
>   if (WARN_ON(!provider->xlate))
> 


Re: [RFC PATCH v5 2/6] interconnect: Add generic interconnect driver for Exynos SoCs

2020-07-02 Thread Georgi Djakov
Hi Sylwester,

Thanks for the patch and apologies for the delayed reply.

On 5/29/20 19:31, Sylwester Nawrocki wrote:
> This patch adds a generic interconnect driver for Exynos SoCs in order
> to provide interconnect functionality for each "samsung,exynos-bus"
> compatible device.
> 
> The SoC topology is a graph (or more specifically, a tree) and its
> edges are specified using the 'samsung,interconnect-parent' in the
> DT. Due to unspecified relative probing order, -EPROBE_DEFER may be
> propagated to ensure that the parent is probed before its children.
> 
> Each bus is now an interconnect provider and an interconnect node as
> well (cf. Documentation/interconnect/interconnect.rst), i.e. every bus
> registers itself as a node. Node IDs are not hardcoded but rather
> assigned dynamically at runtime. This approach allows for using this
> driver with various Exynos SoCs.
> 
> Frequencies requested via the interconnect API for a given node are
> propagated to devfreq using dev_pm_qos_update_request(). Please note
> that it is not an error when CONFIG_INTERCONNECT is 'n', in which
> case all interconnect API functions are no-op.
> 
> Signed-off-by: Artur Świgoń 
> Signed-off-by: Sylwester Nawrocki 
> 
> Changes for v5:
>  - adjust to renamed exynos,interconnect-parent-node property,
>  - use automatically generated platform device id as the interconect
>node id instead of a now unavailable devfreq->id field,
>  - add icc_ prefix to some variables to make the code more self-commenting,
>  - use icc_nodes_remove() instead of icc_node_del() + icc_node_destroy(),
>  - adjust to exynos,interconnect-parent-node property rename to
>samsung,interconnect-parent,
>  - converted to a separate platform driver in drivers/interconnect.
> ---
>  drivers/interconnect/Kconfig |   1 +
>  drivers/interconnect/Makefile|   1 +
>  drivers/interconnect/exynos/Kconfig  |   6 ++
>  drivers/interconnect/exynos/Makefile |   4 +
>  drivers/interconnect/exynos/exynos.c | 185 +++
>  5 files changed, 197 insertions(+)
>  create mode 100644 drivers/interconnect/exynos/Kconfig
>  create mode 100644 drivers/interconnect/exynos/Makefile
>  create mode 100644 drivers/interconnect/exynos/exynos.c
[..]
> +static struct icc_node *exynos_icc_get_parent(struct device_node *np)
> +{
> + struct of_phandle_args args;
> + int num, ret;
> +
> + num = of_count_phandle_with_args(np, "samsung,interconnect-parent",
> + "#interconnect-cells");
> + if (num != 1)
> + return NULL; /* parent nodes are optional */
> +
> + ret = of_parse_phandle_with_args(np, "samsung,interconnect-parent",
> + "#interconnect-cells", 0, &args);
> + if (ret < 0)
> + return ERR_PTR(ret);
> +
> + of_node_put(args.np);
> +
> + return of_icc_get_from_provider(&args);
> +}
> +
> +

Nit: multiple blank lines

[..]
> +static struct icc_node *exynos_generic_icc_xlate(struct of_phandle_args *spec,
> +						  void *data)
> +{
> + struct exynos_icc_priv *priv = data;
> +
> + if (spec->np != priv->dev->parent->of_node)
> + return ERR_PTR(-EINVAL);
> +
> + return priv->node;
> +}
> +
> +static int exynos_generic_icc_remove(struct platform_device *pdev)
> +{
> + struct exynos_icc_priv *priv = platform_get_drvdata(pdev);
> + struct icc_node *parent_node, *node = priv->node;
> +
> + parent_node = exynos_icc_get_parent(priv->dev->parent->of_node);
> + if (parent_node && !IS_ERR(parent_node))

Nit: !IS_ERR_OR_NULL?

> + icc_link_destroy(node, parent_node);
> +
> + icc_nodes_remove(&priv->provider);
> + icc_provider_del(&priv->provider);
> +
> + return 0;
> +}
> +
> +static int exynos_generic_icc_probe(struct platform_device *pdev)
> +{
> + struct device *bus_dev = pdev->dev.parent;
> + struct exynos_icc_priv *priv;
> + struct icc_provider *provider;
> + struct icc_node *icc_node, *icc_parent_node;
> + int ret;
> +
> + priv = devm_kzalloc(&pdev->dev, sizeof(*priv), GFP_KERNEL);
> + if (!priv)
> + return -ENOMEM;
> +
> + priv->dev = &pdev->dev;
> + platform_set_drvdata(pdev, priv);
> +
> + provider = &priv->provider;
> +
> + provider->set = exynos_generic_icc_set;
> + provider->aggregate = icc_std_aggregate;
> + provider->xlate = exynos_generic_icc_xlate;
> + provider->dev = bus_dev;
> + provider->inter_set = true;
> + provider->data = priv;
> +
> + ret = icc_provider_add(provider);

Nit: Maybe it would be better to move this after the node is created. The
idea is to create the nodes first and add the provider when the topology is
populated. It's fine either way here, but i am planning to change this in
some of the existing provider drivers.

> + if (ret < 0)
> + return ret;
> +
> + icc_node = icc_node_create(pdev->i

Re: [PATCH v4 28/37] memory: tegra: Register as interconnect provider

2020-07-02 Thread Georgi Djakov
Hi Dmitry,

Thank you for updating the patches!

On 6/9/20 16:13, Dmitry Osipenko wrote:
> Now memory controller is a memory interconnection provider. This allows us
> to use interconnect API in order to change memory configuration.
> 
> Signed-off-by: Dmitry Osipenko 
> ---
>  drivers/memory/tegra/Kconfig |   1 +
>  drivers/memory/tegra/mc.c| 114 +++
>  drivers/memory/tegra/mc.h|   8 +++
>  include/soc/tegra/mc.h   |   3 +
>  4 files changed, 126 insertions(+)
> 
> diff --git a/drivers/memory/tegra/Kconfig b/drivers/memory/tegra/Kconfig
> index 5bf75b316a2f..7055fdef2c32 100644
> --- a/drivers/memory/tegra/Kconfig
> +++ b/drivers/memory/tegra/Kconfig
> @@ -3,6 +3,7 @@ config TEGRA_MC
>   bool "NVIDIA Tegra Memory Controller support"
>   default y
>   depends on ARCH_TEGRA
> + select INTERCONNECT
>   help
> This driver supports the Memory Controller (MC) hardware found on
> NVIDIA Tegra SoCs.
> diff --git a/drivers/memory/tegra/mc.c b/drivers/memory/tegra/mc.c
> index 772aa021b5f6..7ef7ac9e103e 100644
> --- a/drivers/memory/tegra/mc.c
> +++ b/drivers/memory/tegra/mc.c
> @@ -594,6 +594,118 @@ static __maybe_unused irqreturn_t tegra20_mc_irq(int irq, void *data)
>   return IRQ_HANDLED;
>  }
>  
> +static int tegra_mc_icc_set(struct icc_node *src, struct icc_node *dst)
> +{
> + return 0;
> +}
> +
> +static int tegra_mc_icc_aggregate(struct icc_node *node,
> +   u32 tag, u32 avg_bw, u32 peak_bw,
> +   u32 *agg_avg, u32 *agg_peak)
> +{
> + *agg_avg = min((u64)avg_bw + (*agg_avg), (u64)U32_MAX);
> + *agg_peak = max(*agg_peak, peak_bw);
> +
> + return 0;
> +}
> +
> +/*
> + * Memory Controller (MC) has few Memory Clients that are issuing memory
> + * bandwidth allocation requests to the MC interconnect provider. The MC
> + * provider aggregates the requests and then sends the aggregated request
> + * up to the External Memory Controller (EMC) interconnect provider which
> + * re-configures hardware interface to External Memory (EMEM) in accordance
> + * to the required bandwidth. Each MC interconnect node represents an
> + * individual Memory Client.
> + *
> + * Memory interconnect topology:
> + *
> + *               +----+
> + * +--------+    |    |
> + * | TEXSRD +--->+    |
> + * +--------+    |    |
> + *               |    |    +-----+    +------+
> + *           ... | MC +--->+ EMC +--->+ EMEM |
> + *               |    |    +-----+    +------+
> + * +--------+    |    |
> + * | DISP.. +--->+    |
> + * +--------+    |    |
> + *               +----+
> + */
> +static int tegra_mc_interconnect_setup(struct tegra_mc *mc)
> +{
> + struct icc_onecell_data *data;
> + struct icc_node *node;
> + unsigned int num_nodes;
> + unsigned int i;
> + int err;
> +
> + /* older device-trees don't have interconnect properties */
> + if (!of_find_property(mc->dev->of_node, "#interconnect-cells", NULL))
> + return 0;
> +
> + num_nodes = mc->soc->num_clients;
> +
> + data = devm_kzalloc(mc->dev, struct_size(data, nodes, num_nodes),
> + GFP_KERNEL);
> + if (!data)
> + return -ENOMEM;
> +
> + mc->provider.dev = mc->dev;
> + mc->provider.set = tegra_mc_icc_set;

Hmm, maybe the core should not require a set() implementation and we can
just make it optional instead. Then the dummy function would not be needed.
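
Roughly, that would mean dropping the WARN_ON(!provider->set) check in
icc_provider_add() and skipping such providers when the constraints are
applied, e.g. (just a sketch of the core change, not a real patch):

	/* in apply_constraints() */
	if (p->set) {
		ret = p->set(prev, next);
		if (ret)
			goto out;
	}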

> + mc->provider.data = data;
> + mc->provider.xlate = of_icc_xlate_onecell;
> + mc->provider.aggregate = tegra_mc_icc_aggregate;
> +
> + err = icc_provider_add(&mc->provider);
> + if (err)
> + goto err_msg;

Nit: I am planning to re-organize some of the existing drivers to call
icc_provider_add() after the topology is populated. Could you please move
this after the nodes are created and linked.

> +
> + /* create Memory Controller node */
> + node = icc_node_create(TEGRA_ICC_MC);
> + err = PTR_ERR_OR_ZERO(node);
> + if (err)
> + goto del_provider;
> +
> + node->name = "Memory Controller";
> + icc_node_add(node, &mc->provider);
> +
> + /* link Memory Controller to External Memory Controller */
> + err = icc_link_create(node, TEGRA_ICC_EMC);
> + if (err)
> + goto remove_nodes;
> +
> + for (i = 0; i < num_nodes; i++) {
> + /* create MC client node */
> + node = icc_node_create(mc->soc->clients[i].id);
> + err = PTR_ERR_OR_ZERO(node);
> + if (err)
> + goto remove_nodes;
> +
> + node->name = mc->soc->clients[i].name;
> + icc_node_add(node, &mc->provider);
> +
> + /* link Memory Client to Memory Controller */
> + err = icc_link_create(node, TEGRA_ICC_MC);
> + if (err)
> + goto remove_nodes;
> +
> + data->nodes[i] = node;
> + }

Re: [v2 1/3] drm/msm/dpu: add support for clk and bw scaling for display

2020-04-16 Thread Georgi Djakov
Hi Krishna,

Thanks for the patch!

On 4/2/20 09:52, Krishna Manikandan wrote:
> This change adds support to scale src clk and bandwidth as
> per composition requirements.
> 
> Interconnect registration for bw has been moved to mdp
> device node from mdss to facilitate the scaling.

No Signed-off-by ?

> ---
>  drivers/gpu/drm/msm/disp/dpu1/dpu_core_perf.c  | 106 
> +
>  drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.c |   5 +-
>  drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.h |   4 +
>  drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c|  37 -
>  drivers/gpu/drm/msm/disp/dpu1/dpu_kms.h|   4 +
>  drivers/gpu/drm/msm/disp/dpu1/dpu_mdss.c   |   9 ++-
>  drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c  |  82 +++
>  drivers/gpu/drm/msm/disp/dpu1/dpu_plane.h  |   4 +
>  8 files changed, 228 insertions(+), 23 deletions(-)
> 
> diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_core_perf.c 
> b/drivers/gpu/drm/msm/disp/dpu1/dpu_core_perf.c

[..]
> @@ -186,10 +247,21 @@ static int _dpu_core_perf_crtc_update_bus(struct 
> dpu_kms *kms,
>   perf.max_per_pipe_ib = max(perf.max_per_pipe_ib,
>   dpu_cstate->new_perf.max_per_pipe_ib);
>  
> - DPU_DEBUG("crtc=%d bw=%llu\n", tmp_crtc->base.id,
> - dpu_cstate->new_perf.bw_ctl);
> + perf.bw_ctl += dpu_cstate->new_perf.bw_ctl;
> +
> + DPU_DEBUG("crtc=%d bw=%llu paths:%d\n",
> +   tmp_crtc->base.id,
> +   dpu_cstate->new_perf.bw_ctl, kms->num_paths);
>   }
>   }
> +
> + avg_bw = kms->num_paths ?
> + perf.bw_ctl / kms->num_paths : 0;
> +
> + for (i = 0; i < kms->num_paths; i++)
> + icc_set_bw(kms->path[i],
> + Bps_to_icc(avg_bw), (perf.max_per_pipe_ib));

In what units is max_per_pipe_ib? Can you use Bps_to_icc() or kBps_to_icc()?
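E.g., if it turns out to be in kB/s, something like (sketch):

	for (i = 0; i < kms->num_paths; i++)
		icc_set_bw(kms->path[i], Bps_to_icc(avg_bw),
			   kBps_to_icc(perf.max_per_pipe_ib));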

> +
>   return ret;
>  }
>

[..]

> @@ -1037,8 +1065,15 @@ static int __maybe_unused dpu_runtime_resume(struct 
> device *dev)
>   struct drm_encoder *encoder;
>   struct drm_device *ddev;
>   struct dss_module_power *mp = &dpu_kms->mp;
> + int i;
>  
>   ddev = dpu_kms->dev;
> +
> + /* Min vote of BW is required before turning on AXI clk */
> + for (i = 0; i < dpu_kms->num_paths; i++)
> + icc_set_bw(dpu_kms->path[i], 0,
> + dpu_kms->catalog->perf.min_dram_ib);

Bps_to_icc() ?
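I.e., assuming min_dram_ib is in bytes per second, something like:

	for (i = 0; i < dpu_kms->num_paths; i++)
		icc_set_bw(dpu_kms->path[i], 0,
			   Bps_to_icc(dpu_kms->catalog->perf.min_dram_ib));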

Thanks,
Georgi


Re: [PATCH v2 17/22] memory: tegra30-emc: Register as interconnect provider

2020-04-14 Thread Georgi Djakov
Hi Dmitry,

On 3/30/20 04:08, Dmitry Osipenko wrote:
> Now external memory controller is a memory interconnection provider.
> This allows us to use interconnect API to change memory configuration.
> 
> Signed-off-by: Dmitry Osipenko 
> ---
>  drivers/memory/tegra/tegra30-emc.c | 115 +
>  1 file changed, 115 insertions(+)
> 
> diff --git a/drivers/memory/tegra/tegra30-emc.c 
> b/drivers/memory/tegra/tegra30-emc.c
> index 69698665d431..5a4106173a75 100644
> --- a/drivers/memory/tegra/tegra30-emc.c
> +++ b/drivers/memory/tegra/tegra30-emc.c
> @@ -14,6 +14,7 @@
>  #include 
>  #include 
>  #include 
> +#include 
>  #include 
>  #include 
>  #include 
> @@ -327,6 +328,7 @@ struct tegra_emc {
>   struct device *dev;
>   struct tegra_mc *mc;
>   struct notifier_block clk_nb;
> + struct icc_provider provider;
>   struct clk *clk;
>   void __iomem *regs;
>   unsigned int irq;
> @@ -1264,6 +1266,112 @@ static void tegra_emc_debugfs_init(struct tegra_emc 
> *emc)
>   emc, &tegra_emc_debug_max_rate_fops);
>  }
>  
> +static inline struct tegra_emc *
> +to_tegra_emc_provider(struct icc_provider *provider)
> +{
> + return container_of(provider, struct tegra_emc, provider);
> +}
> +
> +static struct icc_node *
> +emc_of_icc_xlate_onecell(struct of_phandle_args *spec, void *data)
> +{
> + struct icc_provider *provider = data;
> + struct icc_node *node;
> +
> + /* External Memory is the only possible ICC route */
> + list_for_each_entry(node, &provider->nodes, node_list) {
> + if (node->id == TEGRA_ICC_EMEM)
> + return node;
> + }
> +
> + return ERR_PTR(-EINVAL);
> +}
> +
> +static int emc_icc_set(struct icc_node *src, struct icc_node *dst)
> +{
> + struct tegra_emc *emc = to_tegra_emc_provider(dst->provider);
> + unsigned long long rate = icc_units_to_bps(dst->avg_bw);
> + unsigned int dram_data_bus_width_bytes = 4;
> + unsigned int ddr = 2;
> + int err;
> +
> + do_div(rate, ddr * dram_data_bus_width_bytes);
> + rate = min_t(u64, rate, U32_MAX);
> +
> + err = clk_set_min_rate(emc->clk, rate);
> + if (err)
> + return err;
> +
> + err = clk_set_rate(emc->clk, rate);
> + if (err)
> + return err;
> +
> + return 0;
> +}
> +
> +static int emc_icc_aggregate(struct icc_node *node,
> +  u32 tag, u32 avg_bw, u32 peak_bw,
> +  u32 *agg_avg, u32 *agg_peak)
> +{
> + *agg_avg = min((u64)avg_bw + (*agg_avg), (u64)U32_MAX);
> + *agg_peak = max(*agg_peak, peak_bw);
> +
> + return 0;
> +}
> +
> +static int tegra_emc_interconnect_init(struct tegra_emc *emc)
> +{
> + struct icc_node *node;
> + int err;
> +
> + /* older device-trees don't have interconnect properties */
> + if (!of_find_property(emc->dev->of_node, "#interconnect-cells", NULL))
> + return 0;
> +
> + emc->provider.dev = emc->dev;
> + emc->provider.set = emc_icc_set;
> + emc->provider.data = &emc->provider;
> + emc->provider.xlate = emc_of_icc_xlate_onecell;
> + emc->provider.aggregate = emc_icc_aggregate;
> +
> + err = icc_provider_add(&emc->provider);
> + if (err)
> + return err;
> +
> + /* create External Memory Controller node */
> + node = icc_node_create(TEGRA_ICC_EMC);
> + err = PTR_ERR_OR_ZERO(node);
> + if (err)
> + goto del_provider;
> +
> + node->name = "External Memory Controller";
> + icc_node_add(node, &emc->provider);
> +
> + /* link External Memory Controller to External Memory (DRAM) */
> + err = icc_link_create(node, TEGRA_ICC_EMEM);
> + if (err)
> + goto remove_nodes;
> +
> + /* create External Memory node */
> + node = icc_node_create(TEGRA_ICC_EMEM);
> + err = PTR_ERR_OR_ZERO(node);
> + if (err)
> + goto remove_nodes;
> +
> + node->name = "External Memory (DRAM)";
> + icc_node_add(node, &emc->provider);
> +
> + return 0;
> +
> +remove_nodes:
> + icc_nodes_remove(&emc->provider);
> +
> +del_provider:
> + icc_provider_del(&emc->provider);
> +
> + return err;
> +}

All the above seems like a duplicate of what we already have in the previous
patch for tegra20-emc. Can we have a single driver for both? Maybe extract the
above as a separate interconnect provider driver.

> +
>  static int tegra_emc_probe(struct platform_device *pdev)
>  {
>   struct platform_device *mc;
> @@ -1344,6 +1452,13 @@ static int tegra_emc_probe(struct platform_device 
> *pdev)
>   platform_set_drvdata(pdev, emc);
>   tegra_emc_debugfs_init(emc);
>  
> + if (IS_ENABLED(CONFIG_INTERCONNECT)) {
> + err = tegra_emc_interconnect_init(emc);

How about registering a platform device that will use the same driver to handle
the interconnect functionality for both tegra20 and tegra30?
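For example, something like this in tegra_emc_probe() (sketch only; the
"tegra-emc-icc" device name is made up here and the shared driver would bind
to it):

	struct platform_device *icc_pdev;

	icc_pdev = platform_device_register_data(&pdev->dev, "tegra-emc-icc",
						 PLATFORM_DEVID_AUTO, NULL, 0);
	if (IS_ERR(icc_pdev))
		dev_warn(&pdev->dev, "failed to register interconnect device\n");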

Thanks,
Georgi

Re: [PATCH v2 11/22] memory: tegra: Register as interconnect provider

2020-04-14 Thread Georgi Djakov
Hi Dmitry,

Thank you for the patchset!

On 3/30/20 04:08, Dmitry Osipenko wrote:
> Now memory controller is a memory interconnection provider. This allows us
> to use interconnect API in order to change memory configuration.
> 
> Signed-off-by: Dmitry Osipenko 
> ---
>  drivers/memory/tegra/mc.c | 118 ++
>  drivers/memory/tegra/mc.h |   8 +++
>  include/soc/tegra/mc.h|   3 +
>  3 files changed, 129 insertions(+)
> 
> diff --git a/drivers/memory/tegra/mc.c b/drivers/memory/tegra/mc.c
> index ec8403557ed4..bcf0478c5f5a 100644
> --- a/drivers/memory/tegra/mc.c
> +++ b/drivers/memory/tegra/mc.c
> @@ -591,6 +591,117 @@ static __maybe_unused irqreturn_t tegra20_mc_irq(int 
> irq, void *data)
>   return IRQ_HANDLED;
>  }
>  
> +static int tegra_mc_icc_set(struct icc_node *src, struct icc_node *dst)
> +{
> + return 0;
> +}
> +
> +static int tegra_mc_icc_aggregate(struct icc_node *node,
> +   u32 tag, u32 avg_bw, u32 peak_bw,
> +   u32 *agg_avg, u32 *agg_peak)
> +{
> + *agg_avg = min((u64)avg_bw + (*agg_avg), (u64)U32_MAX);
> + *agg_peak = max(*agg_peak, peak_bw);
> +
> + return 0;
> +}
> +
> +/*
> + * Memory Controller (MC) has few Memory Clients that are issuing memory
> + * bandwidth allocation requests to the MC interconnect provider. The MC
> + * provider aggregates the requests and then sends the aggregated request
> + * up to the External Memory Controller (EMC) interconnect provider which
> + * re-configures hardware interface to External Memory (EMEM) in accordance
> + * to the required bandwidth. Each MC interconnect node represents an
> + * individual Memory Client.
> + *
> + * Memory interconnect topology:
> + *
> + *                +----+
> + * +--------+    |    |
> + * | TEXSRD +--->+    |
> + * +--------+    |    |
> + *                |    |    +-----+    +------+
> + *    ...         | MC +--->+ EMC +--->+ EMEM |
> + *                |    |    +-----+    +------+
> + * +--------+    |    |
> + * | DISP.. +--->+    |
> + * +--------+    |    |
> + *                +----+
> + */
> +static int tegra_mc_interconnect_setup(struct tegra_mc *mc)
> +{
> + struct icc_onecell_data *data;
> + struct icc_node *node;
> + unsigned int num_nodes;
> + unsigned int i;
> + int err;
> +
> + /* older device-trees don't have interconnect properties */
> + if (!of_find_property(mc->dev->of_node, "#interconnect-cells", NULL))
> + return 0;
> +
> + num_nodes = mc->soc->num_clients;
> +
> + data = devm_kzalloc(mc->dev, struct_size(data, nodes, num_nodes),
> + GFP_KERNEL);
> + if (!data)
> + return -ENOMEM;
> +
> + mc->provider.dev = mc->dev;
> + mc->provider.set = tegra_mc_icc_set;
> + mc->provider.data = data;
> + mc->provider.xlate = of_icc_xlate_onecell;
> + mc->provider.aggregate = tegra_mc_icc_aggregate;
> +
> + err = icc_provider_add(&mc->provider);
> + if (err)
> + return err;
> +
> + /* create Memory Controller node */
> + node = icc_node_create(TEGRA_ICC_MC);
> + err = PTR_ERR_OR_ZERO(node);
> + if (err)
> + goto del_provider;
> +
> + node->name = "Memory Controller";
> + icc_node_add(node, &mc->provider);
> +
> + /* link Memory Controller to External Memory Controller */
> + err = icc_link_create(node, TEGRA_ICC_EMC);
> + if (err)
> + goto remove_nodes;
> +
> + for (i = 0; i < num_nodes; i++) {
> + /* create MC client node */
> + node = icc_node_create(mc->soc->clients[i].id);
> + err = PTR_ERR_OR_ZERO(node);
> + if (err)
> + goto remove_nodes;
> +
> + node->name = mc->soc->clients[i].name;
> + icc_node_add(node, &mc->provider);
> +
> + /* link Memory Client to Memory Controller */
> + err = icc_link_create(node, TEGRA_ICC_MC);
> + if (err)
> + goto remove_nodes;
> +
> + data->nodes[i] = node;
> + }
> + data->num_nodes = num_nodes;
> +
> + return 0;
> +
> +remove_nodes:
> + icc_nodes_remove(&mc->provider);
> +
> +del_provider:
> + icc_provider_del(&mc->provider);
> +
> + return err;
> +}
> +
>  static int tegra_mc_probe(struct platform_device *pdev)
>  {
>   struct resource *res;
> @@ -699,6 +810,13 @@ static int tegra_mc_probe(struct platform_device *pdev)
>   }
>   }
>  
> + if (IS_ENABLED(CONFIG_INTERCONNECT)) {

The interconnect framework can also be a module and then the build will
fail.
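One option (besides expressing the dependency in Kconfig) would be
IS_REACHABLE(), e.g.:

	/* true only when the interconnect core is guaranteed to be linked in */
	if (IS_REACHABLE(CONFIG_INTERCONNECT)) {
		err = tegra_mc_interconnect_setup(mc);
		if (err < 0)
			return err;
	}

Note that this silently disables the feature for a built-in MC driver with
CONFIG_INTERCONNECT=m, so a Kconfig dependency may still be the better fix.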

> + err = tegra_mc_interconnect_setup(mc);

Maybe register the interconnect provider as a platform sub-device instead?

Thanks,
Georgi


Re: [RFC PATCH v3 5/7] devfreq: exynos-bus: Add interconnect functionality to exynos-bus

2020-01-25 Thread Georgi Djakov
Hi Artur,

On 1/24/20 13:22, Artur Świgoń wrote:
> Hi Georgi,
> 
> On Wed, 2020-01-22 at 19:02 +0200, Georgi Djakov wrote:
>> Hi Artur,
>>
>> On 12/20/19 13:56, Artur Świgoń wrote:
>>> This patch adds interconnect functionality to the exynos-bus devfreq
>>> driver.
>>>
>>> The SoC topology is a graph (or, more specifically, a tree) and its
>>> edges are specified using the 'exynos,interconnect-parent-node' in the
>>> DT. Due to unspecified relative probing order, -EPROBE_DEFER may be
>>> propagated to ensure that the parent is probed before its children.
>>>
>>> Each bus is now an interconnect provider and an interconnect node as well
>>> (cf. Documentation/interconnect/interconnect.rst), i.e. every bus registers
>>> itself as a node. Node IDs are not hardcoded but rather assigned at
>>
>> Just to note that usually the provider consists of multiple nodes and each 
>> node
>> represents a single master or slave port on the AXI bus for example. I am not
>> sure whether this represents correctly the Exynos hardware, so it's up to
>> you.
>>
>>> runtime, in probing order (subject to the above-mentioned exception
>>> regarding relative order). This approach allows for using this driver with
>>> various Exynos SoCs.
>>
>> This sounds good. I am wondering whether such dynamic probing would be useful
>> for other platforms too. Then maybe it would make sense to even have a 
>> common DT
>> property, but we will see.
>>
>> Is this going to be used only together with devfreq?
> 
> Yes, this functions solely as an extension to devfreq, hence the slightly
> unusual architecture (one icc_provider/icc_node per devfreq).

Ok, thanks for clarifying.

> (Compared to a singleton icc_provider, this approach yields less code with
> a very simple xlate()).
> 
> With exactly one icc_node for every devfreq device, I think I will actually
> reuse the devfreq ID (as seen in the device name, e.g. the "3" in "devfreq3")
> for the node ID. The devfreq framework already does the dynamic numbering
> thing that I do in this patch using IDR.
> 

Sounds good.

>>> Frequencies requested via the interconnect API for a given node are
>>> propagated to devfreq using dev_pm_qos_update_request(). Please note that
>>> it is not an error when CONFIG_INTERCONNECT is 'n', in which case all
>>> interconnect API functions are no-op.
>>
>> How about the case where CONFIG_INTERCONNECT=m. Looks like the build will 
>> fail
>> if CONFIG_ARM_EXYNOS_BUS_DEVFREQ=y, so this dependency should be expressed in
>> Kconfig.
> 
> I think adding:
>   depends on INTERCONNECT || !INTERCONNECT

Yes, exactly.

> under ARM_EXYNOS_BUS_DEVFREQ does the trick.
> 
>>>
>>> Signed-off-by: Artur Świgoń 
>>> ---
>>>  drivers/devfreq/exynos-bus.c | 144 +++
>>>  1 file changed, 144 insertions(+)
>>>
>>> diff --git a/drivers/devfreq/exynos-bus.c b/drivers/devfreq/exynos-bus.c
>>> index 9fdb188915e8..694a9581dcdb 100644
>>> --- a/drivers/devfreq/exynos-bus.c
>>> +++ b/drivers/devfreq/exynos-bus.c
>>> @@ -14,14 +14,19 @@
>>>  #include 
>>>  #include 
>>>  #include 
>>> +#include 
>>> +#include 
>>>  #include 
>>>  #include 
>>>  #include 
>>> +#include 
>>>  #include 
>>>  #include 
>>>  
>>>  #define DEFAULT_SATURATION_RATIO   40
>>>  
>>> +#define kbps_to_khz(x) ((x) / 8)
>>> +
>>>  struct exynos_bus {
>>> struct device *dev;
>>>  
>>> @@ -35,6 +40,12 @@ struct exynos_bus {
>>> struct opp_table *opp_table;
>>> struct clk *clk;
>>> unsigned int ratio;
>>> +
>>> +   /* One provider per bus, one node per provider */
>>> +   struct icc_provider provider;
>>> +   struct icc_node *node;
>>> +
>>> +   struct dev_pm_qos_request qos_req;
>>>  };
>>>  
>>>  /*
>>> @@ -205,6 +216,39 @@ static void exynos_bus_passive_exit(struct device *dev)
>>> clk_disable_unprepare(bus->clk);
>>>  }
>>>  
>>> +static int exynos_bus_icc_set(struct icc_node *src, struct icc_node *dst)
>>> +{
>>> +   struct exynos_bus *src_bus = src->data, *dst_bus = dst->data;
>>> +   s32 src_freq = kbps_to_khz(src->avg_bw);
>>> +   s32 dst_freq = kbps_to_khz(dst->avg_bw);
>>> +   int ret;

Re: [RFC PATCH v3 4/7] arm: dts: exynos: Add interconnect bindings for Exynos4412

2020-01-23 Thread Georgi Djakov
Hi Artur,

Thank you for your continuous work on this.

On 12/20/19 13:56, Artur Świgoń wrote:
> This patch adds the following properties to the Exynos4412 DT:
>   - exynos,interconnect-parent-node: to declare connections between
> nodes in order to guarantee PM QoS requirements between nodes;

Is this DT property documented somewhere? I believe that there should be a patch
to document it somewhere in Documentation/devicetree/bindings/ before using it.

Thanks,
Georgi

>   - #interconnect-cells: required by the interconnect framework.
> 
> Note that #interconnect-cells is always zero and node IDs are not
> hardcoded anywhere.
> 
> Signed-off-by: Artur Świgoń 
> ---
>  arch/arm/boot/dts/exynos4412-odroid-common.dtsi | 5 +
>  1 file changed, 5 insertions(+)
> 
> diff --git a/arch/arm/boot/dts/exynos4412-odroid-common.dtsi 
> b/arch/arm/boot/dts/exynos4412-odroid-common.dtsi
> index 4ce3d77a6704..d9d70eacfcaf 100644
> --- a/arch/arm/boot/dts/exynos4412-odroid-common.dtsi
> +++ b/arch/arm/boot/dts/exynos4412-odroid-common.dtsi
> @@ -90,6 +90,7 @@
>  &bus_dmc {
>   exynos,ppmu-device = <&ppmu_dmc0_3>, <&ppmu_dmc1_3>;
>   vdd-supply = <&buck1_reg>;
> + #interconnect-cells = <0>;
>   status = "okay";
>  };
>  
> @@ -106,6 +107,8 @@
>  &bus_leftbus {
>   exynos,ppmu-device = <&ppmu_leftbus_3>, <&ppmu_rightbus_3>;
>   vdd-supply = <&buck3_reg>;
> + exynos,interconnect-parent-node = <&bus_dmc>;
> + #interconnect-cells = <0>;
>   status = "okay";
>  };
>  
> @@ -116,6 +119,8 @@
>  
>  &bus_display {
>   exynos,parent-bus = <&bus_leftbus>;
> + exynos,interconnect-parent-node = <&bus_leftbus>;
> + #interconnect-cells = <0>;
>   status = "okay";
>  };
>  
> 


Re: [RFC PATCH v3 5/7] devfreq: exynos-bus: Add interconnect functionality to exynos-bus

2020-01-23 Thread Georgi Djakov
Hi Artur,

On 12/20/19 13:56, Artur Świgoń wrote:
> This patch adds interconnect functionality to the exynos-bus devfreq
> driver.
> 
> The SoC topology is a graph (or, more specifically, a tree) and its
> edges are specified using the 'exynos,interconnect-parent-node' in the
> DT. Due to unspecified relative probing order, -EPROBE_DEFER may be
> propagated to ensure that the parent is probed before its children.
> 
> Each bus is now an interconnect provider and an interconnect node as well
> (cf. Documentation/interconnect/interconnect.rst), i.e. every bus registers
> itself as a node. Node IDs are not hardcoded but rather assigned at

Just to note that usually the provider consists of multiple nodes and each node
represents a single master or slave port on the AXI bus for example. I am not
sure whether this represents correctly the Exynos hardware, so it's up to
you.

> runtime, in probing order (subject to the above-mentioned exception
> regarding relative order). This approach allows for using this driver with
> various Exynos SoCs.

This sounds good. I am wondering whether such dynamic probing would be useful
for other platforms too. Then maybe it would make sense to even have a common DT
property, but we will see.

Is this going to be used only together with devfreq?

> Frequencies requested via the interconnect API for a given node are
> propagated to devfreq using dev_pm_qos_update_request(). Please note that
> it is not an error when CONFIG_INTERCONNECT is 'n', in which case all
> interconnect API functions are no-op.

How about the case where CONFIG_INTERCONNECT=m. Looks like the build will fail
if CONFIG_ARM_EXYNOS_BUS_DEVFREQ=y, so this dependency should be expressed in
Kconfig.

Thanks,
Georgi

> 
> Signed-off-by: Artur Świgoń 
> ---
>  drivers/devfreq/exynos-bus.c | 144 +++
>  1 file changed, 144 insertions(+)
> 
> diff --git a/drivers/devfreq/exynos-bus.c b/drivers/devfreq/exynos-bus.c
> index 9fdb188915e8..694a9581dcdb 100644
> --- a/drivers/devfreq/exynos-bus.c
> +++ b/drivers/devfreq/exynos-bus.c
> @@ -14,14 +14,19 @@
>  #include 
>  #include 
>  #include 
> +#include 
> +#include 
>  #include 
>  #include 
>  #include 
> +#include 
>  #include 
>  #include 
>  
>  #define DEFAULT_SATURATION_RATIO 40
>  
> +#define kbps_to_khz(x) ((x) / 8)
> +
>  struct exynos_bus {
>   struct device *dev;
>  
> @@ -35,6 +40,12 @@ struct exynos_bus {
>   struct opp_table *opp_table;
>   struct clk *clk;
>   unsigned int ratio;
> +
> + /* One provider per bus, one node per provider */
> + struct icc_provider provider;
> + struct icc_node *node;
> +
> + struct dev_pm_qos_request qos_req;
>  };
>  
>  /*
> @@ -205,6 +216,39 @@ static void exynos_bus_passive_exit(struct device *dev)
>   clk_disable_unprepare(bus->clk);
>  }
>  
> +static int exynos_bus_icc_set(struct icc_node *src, struct icc_node *dst)
> +{
> + struct exynos_bus *src_bus = src->data, *dst_bus = dst->data;
> + s32 src_freq = kbps_to_khz(src->avg_bw);
> + s32 dst_freq = kbps_to_khz(dst->avg_bw);
> + int ret;
> +
> + ret = dev_pm_qos_update_request(&src_bus->qos_req, src_freq);
> + if (ret < 0) {
> + dev_err(src_bus->dev, "failed to update PM QoS request");
> + return ret;
> + }
> +
> + ret = dev_pm_qos_update_request(&dst_bus->qos_req, dst_freq);
> + if (ret < 0) {
> + dev_err(dst_bus->dev, "failed to update PM QoS request");
> + return ret;
> + }
> +
> + return 0;
> +}
> +
> +static struct icc_node *exynos_bus_icc_xlate(struct of_phandle_args *spec,
> +  void *data)
> +{
> + struct exynos_bus *bus = data;
> +
> + if (spec->np != bus->dev->of_node)
> + return ERR_PTR(-EINVAL);
> +
> + return bus->node;
> +}
> +
>  static int exynos_bus_parent_parse_of(struct device_node *np,
>   struct exynos_bus *bus)
>  {
> @@ -419,6 +463,96 @@ static int exynos_bus_profile_init_passive(struct 
> exynos_bus *bus,
>   return 0;
>  }
>  
> +static struct icc_node *exynos_bus_icc_get_parent(struct exynos_bus *bus)
> +{
> + struct device_node *np = bus->dev->of_node;
> + struct of_phandle_args args;
> + int num, ret;
> +
> + num = of_count_phandle_with_args(np, "exynos,interconnect-parent-node",
> + "#interconnect-cells");
> + if (num != 1)
> + return NULL; /* parent nodes are optional */
> +
> + ret = of_parse_phandle_with_args(np, "exynos,interconnect-parent-node",
> + "#interconnect-cells", 0, &args);
> + if (ret < 0)
> + return ERR_PTR(ret);
> +
> + of_node_put(args.np);
> +
> + return of_icc_get_from_provider(&args);
> +}
> +
> +static int exynos_bus_icc_init(struct exynos_bus *bus)
> +{
> + static DEFINE_IDA(ida);
> +
> + struct device *dev = bus->dev;
> +

Re: [RFC PATCH 09/11] devfreq: exynos-bus: Add interconnect functionality to exynos-bus

2019-08-07 Thread Georgi Djakov
Hi Artur,

On 8/1/19 10:59, Artur Świgoń wrote:
> Hi Georgi,
> 
> On Fri, 2019-07-26 at 11:05 +0300, Georgi Djakov wrote:
>> Hi Artur,
>>
>> On 7/23/19 15:20, Artur Świgoń wrote:
>>> This patch adds interconnect functionality to the exynos-bus devfreq
>>> driver.
>>>
>>> The SoC topology is a graph (or, more specifically, a tree) and most of its
>>> edges are taken from the devfreq parent-child hierarchy (cf.
>>> Documentation/devicetree/bindings/devfreq/exynos-bus.txt). The previous
>>> patch adds missing edges to the DT (under the name 'parent'). Due to
>>> unspecified relative probing order, -EPROBE_DEFER may be propagated to
>>> guarantee that a child is probed before its parent.
>>>
>>> Each bus is now an interconnect provider and an interconnect node as well
>>> (cf. Documentation/interconnect/interconnect.rst), i.e. every bus registers
>>> itself as a node. Node IDs are not hardcoded but rather assigned at
>>> runtime, in probing order (subject to the above-mentioned exception
>>> regarding relative order). This approach allows for using this driver with
>>> various Exynos SoCs.
>>
>> I am not familiar with the Exynos bus topology, but it seems to me that it's 
>> not
>> represented correctly. An interconnect provider with just a single node 
>> (port)
>> is odd. I would expect that each provider consists of multiple master and 
>> slave
>> nodes. This data would be used by a framework to understand what are the 
>> links
>> and how the traffic flows between the IP blocks and through which buses.
> 
> To summarize the exynos-bus topology[1] used by the devfreq driver: There are
> many data buses for data transfer in Samsung Exynos SoC. Every bus has its own
> clock. Buses often share power lines, in which case one of the buses on the
> power line is referred to as 'parent' (or as 'devfreq' in the DT). In the
> particular case of Exynos4412[1][2], the topology can be expressed as follows:
> 
> bus_dmc
> -- bus_acp
> -- bus_c2c
> 
> bus_leftbus
> -- bus_rightbus
> -- bus_display
> -- bus_fsys
> -- bus_peri
> -- bus_mfc
> 
> Where bus_dmc and bus_leftbus probably could be referred to as masters, and 
> the
> following indented nodes as slaves. Patch 08/11 of this RFC additionally adds
> the following to the DT:
> 
> bus_dmc
> -- bus_leftbus
> 
> Which makes the topology a valid tree.
> 
> The exynos-bus concept in devfreq[3] is designed in such a way that every bus 
> is
> probed separately as a platform device, and is a largely independent entity.
>
> This RFC proposes an extension to the existing devfreq driver that basically
> provides a simple QoS to ensure minimum clock frequency for selected buses
> (possibly overriding devfreq governor calculations) using the interconnect
> framework.
> 
> The hierarchy is modelled in such a way that every bus is an interconnect 
> node.
> On the other hand, what is considered an interconnect provider here is quite
> arbitrary, but for the reasons mentioned in the above paragraph, this RFC
> assumes that every bus is a provider of itself as a node. Using an alternative

IIUC, in case we want to transfer data between the display and the memory
controller, the path would look like this:

display --> bus_display --> bus_leftbus --> bus_dmc --> memory

But the bus_display for example would have not one, but two nodes (ports),
right?  One representing the link to the display controller and another one
representing the link to bus_leftbus? So I think that all the buses should
have at least two nodes, to represent each end of the wire.
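As a purely illustrative sketch (all node IDs below are made up, error
handling omitted), bus_display could then register two nodes of its own and
link them:

	struct icc_node *ip_port, *parent_port;

	/* one port facing the display IP, one port facing the parent bus */
	ip_port = icc_node_create(EXYNOS_ICC_BUS_DISPLAY_IP);
	parent_port = icc_node_create(EXYNOS_ICC_BUS_DISPLAY_PARENT);
	icc_node_add(ip_port, &bus->provider);
	icc_node_add(parent_port, &bus->provider);

	/* internal link across the bus, then a link towards bus_leftbus */
	icc_link_create(ip_port, EXYNOS_ICC_BUS_DISPLAY_PARENT);
	icc_link_create(parent_port, EXYNOS_ICC_BUS_LEFTBUS_IP);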

> singleton provider approach was deemed more complicated since the 'dev' field 
> in
> 'struct icc_provider' has to be set to something meaningful and we are tied to
> the 'samsung,exynos-bus' compatible string in the driver (and multiple 
> instances
> of exynos-bus probed in indeterminate relative order).
> 

Sure, the rest makes sense to me.

Thanks,
Georgi


Re: [RFC PATCH 09/11] devfreq: exynos-bus: Add interconnect functionality to exynos-bus

2019-07-26 Thread Georgi Djakov
Hi Artur,

On 7/23/19 15:20, Artur Świgoń wrote:
> This patch adds interconnect functionality to the exynos-bus devfreq
> driver.
> 
> The SoC topology is a graph (or, more specifically, a tree) and most of its
> edges are taken from the devfreq parent-child hierarchy (cf.
> Documentation/devicetree/bindings/devfreq/exynos-bus.txt). The previous
> patch adds missing edges to the DT (under the name 'parent'). Due to
> unspecified relative probing order, -EPROBE_DEFER may be propagated to
> guarantee that a child is probed before its parent.
> 
> Each bus is now an interconnect provider and an interconnect node as well
> (cf. Documentation/interconnect/interconnect.rst), i.e. every bus registers
> itself as a node. Node IDs are not hardcoded but rather assigned at
> runtime, in probing order (subject to the above-mentioned exception
> regarding relative order). This approach allows for using this driver with
> various Exynos SoCs.

I am not familiar with the Exynos bus topology, but it seems to me that it's not
represented correctly. An interconnect provider with just a single node (port)
is odd. I would expect that each provider consists of multiple master and slave
nodes. This data would be used by a framework to understand what are the links
and how the traffic flows between the IP blocks and through which buses.

> The devfreq target() callback provided by exynos-bus now selects either the
> frequency calculated by the devfreq governor or the frequency requested via
> the interconnect API for the given node, whichever is higher.

This completely makes sense. We just need to be sure that the interconnect
framework is used correctly.

Thanks,
Georgi


Re: [PATCH 5/5] drm/msm/mdp5: Use the interconnect API

2019-05-30 Thread Georgi Djakov
On 5/8/19 23:42, Rob Clark wrote:
> From: Georgi Djakov 
> 

Let's put some text in the commit message:

The interconnect API provides an interface for consumer drivers to express
their bandwidth needs in the SoC. This data is aggregated and the on-chip
interconnect hardware is configured to the most appropriate power/performance
profile.

Use the API to configure the interconnects and request bandwidth between DDR and
the display hardware (MDP port(s) and rotator downscaler).


> Signed-off-by: Georgi Djakov 
> Signed-off-by: Rob Clark 
> ---
>  drivers/gpu/drm/msm/disp/mdp5/mdp5_kms.c | 14 ++
>  1 file changed, 14 insertions(+)
> 
> diff --git a/drivers/gpu/drm/msm/disp/mdp5/mdp5_kms.c 
> b/drivers/gpu/drm/msm/disp/mdp5/mdp5_kms.c
> index 97179bec8902..54d2b4c2b09f 100644
> --- a/drivers/gpu/drm/msm/disp/mdp5/mdp5_kms.c
> +++ b/drivers/gpu/drm/msm/disp/mdp5/mdp5_kms.c
> @@ -16,6 +16,7 @@
>   * this program.  If not, see <http://www.gnu.org/licenses/>.
>   */
>  
> +#include 
>  #include 
>  
>  #include "msm_drv.h"
> @@ -1050,6 +1051,19 @@ static const struct component_ops mdp5_ops = {
>  
>  static int mdp5_dev_probe(struct platform_device *pdev)
>  {
> + struct icc_path *path0 = of_icc_get(&pdev->dev, "port0");
> + struct icc_path *path1 = of_icc_get(&pdev->dev, "port1");
> + struct icc_path *path_rot = of_icc_get(&pdev->dev, "rotator");

It would be better to change just the names to "mdp0-mem", "mdp1-mem",
"rotator-mem" for consistency and denote that the path target is the DDR memory.

> +
> + if (IS_ERR(path0))
> + return PTR_ERR(path0);
> + icc_set_bw(path0, 0, MBps_to_icc(6400));
> +
> + if (!IS_ERR(path1))
> + icc_set_bw(path1, 0, MBps_to_icc(6400));
> + if (!IS_ERR(path_rot))
> + icc_set_bw(path_rot, 0, MBps_to_icc(6400));
> +
>   DBG("");
>   return component_add(&pdev->dev, &mdp5_ops);
>  }
> 

Thanks,
Georgi

Re: [PATCH 2/5] drm/msm/dpu: Integrate interconnect API in MDSS

2019-05-29 Thread Georgi Djakov
On 5/13/19 17:47, Sean Paul wrote:
> On Wed, May 08, 2019 at 01:42:12PM -0700, Rob Clark wrote:
>> From: Jayant Shekhar 
>>
>> The interconnect framework is designed to provide a
>> standard kernel interface to control the settings of
>> the interconnects on a SoC.
>>
>> The interconnect API uses a consumer/provider-based model,
>> where the providers are the interconnect buses and the
>> consumers could be various drivers.
>>
>> MDSS is one of the interconnect consumers which uses the
>> interconnect APIs to get the path between endpoints and
>> set its bandwidth requirement for the given interconnected
>> path.
>>
>> Changes in v2:
>>  - Remove error log and unnecessary check (Jordan Crouse)
>>
>> Changes in v3:
>>  - Code clean involving variable name change, removal
>>of extra paranthesis and variables (Matthias Kaehlcke)
>>
>> Changes in v4:
>>  - Add comments, spacings, tabs, proper port name
>>and icc macro (Georgi Djakov)
>>
>> Changes in v5:
>>  - Commit text and parenthesis alignment (Georgi Djakov)
>>
>> Changes in v6:
>>  - Change to new icc_set API's (Doug Anderson)
>>
>> Changes in v7:
>>  - Fixed a typo
>>
>> Signed-off-by: Sravanthi Kollukuduru 
>> Signed-off-by: Jayant Shekhar 
>> Signed-off-by: Rob Clark 
>> ---
>>  drivers/gpu/drm/msm/disp/dpu1/dpu_mdss.c | 49 ++--
>>  1 file changed, 45 insertions(+), 4 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_mdss.c 
>> b/drivers/gpu/drm/msm/disp/dpu1/dpu_mdss.c
>> index 7316b4ab1b85..e3c56ccd7357 100644
>> --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_mdss.c
>> +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_mdss.c
>> @@ -4,11 +4,15 @@
>>   */
>>  
>>  #include "dpu_kms.h"
>> +#include 
>>  
>>  #define to_dpu_mdss(x) container_of(x, struct dpu_mdss, base)
>>  
>>  #define HW_INTR_STATUS  0x0010
>>  
>> +/* Max BW defined in KBps */
>> +#define MAX_BW  680
>> +
>>  struct dpu_irq_controller {
>>  unsigned long enabled_mask;
>>  struct irq_domain *domain;
>> @@ -21,8 +25,30 @@ struct dpu_mdss {
>>  u32 hwversion;
>>  struct dss_module_power mp;
>>  struct dpu_irq_controller irq_controller;
>> +struct icc_path *path[2];
>> +u32 num_paths;
>>  };
>>  
>> +static int dpu_mdss_parse_data_bus_icc_path(struct drm_device *dev,
>> +struct dpu_mdss *dpu_mdss)
>> +{
>> +struct icc_path *path0 = of_icc_get(dev->dev, "mdp0-mem");
>> +struct icc_path *path1 = of_icc_get(dev->dev, "mdp1-mem");
>> +
>> +if (IS_ERR(path0))
> 
> of_icc_get can also return NULL, it looks like we also want to guard against
> this case and keep num_paths == 0.

of_icc_get() returns NULL when the interconnect API is not enabled or the DT
properties are not present. In this case, passing a NULL path to the icc
functions is just a nop. So it should be fine either way.
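E.g. on the consumer side (sketch):

	struct icc_path *path = of_icc_get(dev->dev, "mdp1-mem");

	if (IS_ERR(path))	/* a real error, e.g. -EPROBE_DEFER */
		return PTR_ERR(path);

	/*
	 * path may be NULL here (framework disabled or no DT property);
	 * icc_set_bw() and icc_put() are simply no-ops in that case.
	 */
	icc_set_bw(path, 0, MBps_to_icc(6400));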

Acked-by: Georgi Djakov 

Thanks,
Georgi


Re: [PATCH v4 1/7] dt-bindings: interconnect: Add a dma interconnect name

2019-03-24 Thread Georgi Djakov
On 3/14/19 22:26, Maxime Ripard wrote:
> The current DT bindings assume that the DMA will be performed by the
> devices through their parent DT node, and rely on that assumption for the
> address translation using dma-ranges.
> 
> However, some SoCs have devices that will perform DMA through another bus,
> with separate address translation rules. We therefore need to express that
> relationship, through the special interconnect name "dma".

s/dma/dma-mem/

> 
> Signed-off-by: Maxime Ripard 

Acked-by: Georgi Djakov 

> ---
>  Documentation/devicetree/bindings/interconnect/interconnect.txt | 4 
>  1 file changed, 4 insertions(+)
> 
> diff --git a/Documentation/devicetree/bindings/interconnect/interconnect.txt 
> b/Documentation/devicetree/bindings/interconnect/interconnect.txt
> index 5a3c575b387a..6f5d23a605b7 100644
> --- a/Documentation/devicetree/bindings/interconnect/interconnect.txt
> +++ b/Documentation/devicetree/bindings/interconnect/interconnect.txt
> @@ -51,6 +51,10 @@ interconnect-names : List of interconnect path name 
> strings sorted in the same
>interconnect-names to match interconnect paths with 
> interconnect
>specifier pairs.
>  
> + Reserved interconnect names:
> +  * dma-mem: Path from the device to the main memory of
> + the system
> +
>  Example:
>  
>   sdhci@7864000 {
> 

Re: [PATCH v3 1/7] dt-bindings: interconnect: Add a dma interconnect name

2019-03-14 Thread Georgi Djakov
Hi,

On 3/11/19 12:11, Maxime Ripard wrote:
> On Fri, Mar 08, 2019 at 12:09:47AM +0800, Chen-Yu Tsai wrote:
>> On Thu, Mar 7, 2019 at 11:48 PM Maxime Ripard  
>> wrote:
>>>
>>> On Thu, Mar 07, 2019 at 05:15:20PM +0200, Georgi Djakov wrote:
>>>> Hi,
>>>>
>>>> On 3/5/19 18:14, Robin Murphy wrote:
>>>>> On 05/03/2019 15:53, Maxime Ripard wrote:
>>>>>> Hi,
>>>>>>
>>>>>> On Fri, Mar 01, 2019 at 07:48:15PM +0200, Georgi Djakov wrote:
>>>>>>> On 2/11/19 17:02, Maxime Ripard wrote:
>>>>>>>> The current DT bindings assume that the DMA will be performed by the
>>>>>>>> devices through their parent DT node, and rely on that assumption
>>>>>>>> for the
>>>>>>>> address translation using dma-ranges.
>>>>>>>>
>>>>>>>> However, some SoCs have devices that will perform DMA through
>>>>>>>> another bus,
>>>>>>>> with separate address translation rules. We therefore need to
>>>>>>>> express that
>>>>>>>> relationship, through the special interconnect name "dma".
>>>>>>>>
>>>>>>>> Signed-off-by: Maxime Ripard 
>>>>>>>> ---
>>>>>>>>   Documentation/devicetree/bindings/interconnect/interconnect.txt |
>>>>>>>> 3 +++
>>>>>>>>   1 file changed, 3 insertions(+)
>>>>>>>>
>>>>>>>> diff --git
>>>>>>>> a/Documentation/devicetree/bindings/interconnect/interconnect.txt
>>>>>>>> b/Documentation/devicetree/bindings/interconnect/interconnect.txt
>>>>>>>> index 5a3c575b387a..e69fc2d992c3 100644
>>>>>>>> --- a/Documentation/devicetree/bindings/interconnect/interconnect.txt
>>>>>>>> +++ b/Documentation/devicetree/bindings/interconnect/interconnect.txt
>>>>>>>> @@ -51,6 +51,9 @@ interconnect-names : List of interconnect path
>>>>>>>> name strings sorted in the same
>>>>>>>>interconnect-names to match interconnect paths with
>>>>>>>> interconnect
>>>>>>>>specifier pairs.
>>>>>>>>   + Reserved interconnect names:
>>>>>>>> + * dma: Path from the device to the main
>>>>>>>> memory of the system
>>>>>>>
>>>>>>> Bikeshed: As it's from the device to the main memory, maybe here we can
>>>>>>> also denote this my calling the path dma-mem or dma-memory. For other
>>>>>>> paths, we are trying to mention both the source and the destination and
>>>>>>> maybe it would be good to be consistent although this is special one.
>>>>>>
>>>>>> I'm not sure I understand what you mean. You'd like two interconnect
>>>>>> names, one called dma-memory, and one memory-dma?
>>>>>
>>>>> Hmm, yes, it's not like "dma" describes an actual source or destination
>>>>> either :/
>>>>
>>>> IIUC, it's a path (bus) that a dma device use to access some memory
>>>> (read or/and write). So i have used source-destination above more in the
>>>> sense of initiator-target or master-slave. My suggestion was just to
>>>> change the reserved interconnect name from "dma" to "dma-mem" or
>>>> "dma-memory".
>>>
>>> If dma is an issue in itself, maybe we can call it "device-memory" ?
>>
>> Might I ask what happens if the device can both DMA to and from memory?
> 
> We can create another one called memory-device if that's needed?

I don't think this is needed, since in both cases the DMA device is the
initiator (master) of the transfer, so memory-device or memory-dma does
not make much sense. Sorry for the confusion.

> 
>> IIRC the display frontends, backends, and mixers all have writeback
>> capability, using the same interconnect port.
> 
> I think in both cases it's the same path. The camera driver also need
> to have that offset, even though it's a producer and not a consumer,
> and the VPU does too.

Again, the camera driver and probably the VPU too would be the
initiators, so regarding naming it should be the same (despite that the
data flow is bi-directional).

Thanks,
Georgi

Re: [PATCH v3 1/7] dt-bindings: interconnect: Add a dma interconnect name

2019-03-07 Thread Georgi Djakov
Hi,

On 3/5/19 18:14, Robin Murphy wrote:
> On 05/03/2019 15:53, Maxime Ripard wrote:
>> Hi,
>>
>> On Fri, Mar 01, 2019 at 07:48:15PM +0200, Georgi Djakov wrote:
>>> On 2/11/19 17:02, Maxime Ripard wrote:
>>>> The current DT bindings assume that the DMA will be performed by the
>>>> devices through their parent DT node, and rely on that assumption
>>>> for the
>>>> address translation using dma-ranges.
>>>>
>>>> However, some SoCs have devices that will perform DMA through
>>>> another bus,
>>>> with separate address translation rules. We therefore need to
>>>> express that
>>>> relationship, through the special interconnect name "dma".
>>>>
>>>> Signed-off-by: Maxime Ripard 
>>>> ---
>>>>   Documentation/devicetree/bindings/interconnect/interconnect.txt |
>>>> 3 +++
>>>>   1 file changed, 3 insertions(+)
>>>>
>>>> diff --git
>>>> a/Documentation/devicetree/bindings/interconnect/interconnect.txt
>>>> b/Documentation/devicetree/bindings/interconnect/interconnect.txt
>>>> index 5a3c575b387a..e69fc2d992c3 100644
>>>> --- a/Documentation/devicetree/bindings/interconnect/interconnect.txt
>>>> +++ b/Documentation/devicetree/bindings/interconnect/interconnect.txt
>>>> @@ -51,6 +51,9 @@ interconnect-names : List of interconnect path
>>>> name strings sorted in the same
>>>>    interconnect-names to match interconnect paths with
>>>> interconnect
>>>>    specifier pairs.
>>>>   + Reserved interconnect names:
>>>> + * dma: Path from the device to the main
>>>> memory of the system
>>>
>>> Bikeshed: As it's from the device to the main memory, maybe here we can
>>> also denote this my calling the path dma-mem or dma-memory. For other
>>> paths, we are trying to mention both the source and the destination and
>>> maybe it would be good to be consistent although this is special one.
>>
>> I'm not sure I understand what you mean. You'd like two interconnect
>> names, one called dma-memory, and one memory-dma?
> 
> Hmm, yes, it's not like "dma" describes an actual source or destination
> either :/

IIUC, it's a path (bus) that a DMA device uses to access some memory
(read and/or write). So I have used source-destination above more in the
sense of initiator-target or master-slave. My suggestion was just to
change the reserved interconnect name from "dma" to "dma-mem" or
"dma-memory".

Thanks,
Georgi

Re: [PATCH v3 1/7] dt-bindings: interconnect: Add a dma interconnect name

2019-03-03 Thread Georgi Djakov
Hi Maxime,

On 2/11/19 17:02, Maxime Ripard wrote:
> The current DT bindings assume that the DMA will be performed by the
> devices through their parent DT node, and rely on that assumption for the
> address translation using dma-ranges.
> 
> However, some SoCs have devices that will perform DMA through another bus,
> with separate address translation rules. We therefore need to express that
> relationship, through the special interconnect name "dma".
> 
> Signed-off-by: Maxime Ripard 
> ---
>  Documentation/devicetree/bindings/interconnect/interconnect.txt | 3 +++
>  1 file changed, 3 insertions(+)
> 
> diff --git a/Documentation/devicetree/bindings/interconnect/interconnect.txt 
> b/Documentation/devicetree/bindings/interconnect/interconnect.txt
> index 5a3c575b387a..e69fc2d992c3 100644
> --- a/Documentation/devicetree/bindings/interconnect/interconnect.txt
> +++ b/Documentation/devicetree/bindings/interconnect/interconnect.txt
> @@ -51,6 +51,9 @@ interconnect-names : List of interconnect path name strings 
> sorted in the same
>interconnect-names to match interconnect paths with 
> interconnect
>specifier pairs.
>  
> + Reserved interconnect names:
> + * dma: Path from the device to the main memory of 
> the system

Bikeshed: As it's from the device to the main memory, maybe here we can
also denote this by calling the path dma-mem or dma-memory. For other
paths, we are trying to mention both the source and the destination and
maybe it would be good to be consistent although this is a special one.

Thanks,
Georgi

Re: [PATCH v3 2/3] dt-bindings: drm/msm/a6xx: Document interconnect properties for GPU

2019-02-22 Thread Georgi Djakov
Hi,

On 12/20/18 19:30, Jordan Crouse wrote:
> Add documentation for the interconnect and interconnect-names bindings
> for the GPU node as detailed by bindings/interconnect/interconnect.txt.
> 
> Signed-off-by: Jordan Crouse 
> ---
> 
>  Documentation/devicetree/bindings/display/msm/gpu.txt | 4 
>  1 file changed, 4 insertions(+)
> 
> diff --git a/Documentation/devicetree/bindings/display/msm/gpu.txt 
> b/Documentation/devicetree/bindings/display/msm/gpu.txt
> index 9c89f4fdb8ca..5b04393dcb15 100644
> --- a/Documentation/devicetree/bindings/display/msm/gpu.txt
> +++ b/Documentation/devicetree/bindings/display/msm/gpu.txt
> @@ -20,6 +20,8 @@ Required properties:
> - qcom,adreno-630.2
>  - iommus: optional phandle to an adreno iommu instance
>  - operating-points-v2: optional phandle to the OPP operating points
> +- interconnect: optional phandle to a interconnect provider.  See

Nit: s/interconnect:/interconnects:/
Nit: s/a interconnect/an interconnect/

> +  ../interconnect/interconnect.txt for details.
>  - qcom,gmu: For GMU attached devices a phandle to the GMU device that will
>control the power for the GPU. Applicable targets:
>  - qcom,adreno-630.2
> @@ -68,6 +70,8 @@ Example a6xx (with GMU):
>  
>   operating-points-v2 = <&gpu_opp_table>;
>  
> + interconnects = <&rsc_hlos MASTER_GFX3D &rsc_hlos SLAVE_EBI1>;
> +
>   qcom,gmu = <&gmu>;
>   };
>  };
> 

Acked-by: Georgi Djakov 

Thanks,
Georgi

Re: [PATCH v3 1/3] drm/msm/a6xx: Add support for an interconnect path

2019-01-22 Thread Georgi Djakov
Hi Rob,

On 1/18/19 21:16, Rob Clark wrote:
> On Fri, Jan 18, 2019 at 1:06 PM Doug Anderson  wrote:
>>
>> Hi,
>>
>> On Thu, Dec 20, 2018 at 9:30 AM Jordan Crouse  wrote:
>>>
>>> Try to get the interconnect path for the GPU and vote for the maximum
>>> bandwidth to support all frequencies. This is needed for performance.
>>> Later we will want to scale the bandwidth based on the frequency to
>>> also optimize for power but that will require some device tree
>>> infrastructure that does not yet exist.
>>>
>>> v5: Remove hardcoded interconnect name and just use the default
>>
>> nit: ${SUBJECT} says v3, but this is v5.
>>
>> I'll put in my usual plug for considering "patman" to help post
>> patches.  Even though it lives in the u-boot git repo it's still a gem
>> for kernel work.
>> 
>>
>>
>>> @@ -85,6 +89,12 @@ static void __a6xx_gmu_set_freq(struct a6xx_gmu *gmu, 
>>> int index)
>>> dev_err(gmu->dev, "GMU set GPU frequency error: %d\n", ret);
>>>
>>> gmu->freq = gmu->gpu_freqs[index];
>>> +
>>> +   /*
>>> +* Eventually we will want to scale the path vote with the 
>>> frequency but
>>> +* for now leave it at max so that the performance is nominal.
>>> +*/
>>> +   icc_set(gpu->icc_path, 0, MBps_to_icc(7216));
>>
>> You'll need to change icc_set() here to icc_set_bw() to match v13, AKA:
>>
>> - https://patchwork.kernel.org/patch/10766335/
>> - https://lkml.kernel.org/r/20190116161103.6937-2-georgi.dja...@linaro.org
>>
>>
>>> @@ -695,6 +707,9 @@ int a6xx_gmu_resume(struct a6xx_gpu *a6xx_gpu)
>>> if (ret)
>>> goto out;
>>>
>>> +   /* Set the bus quota to a reasonable value for boot */
>>> +   icc_set(gpu->icc_path, 0, MBps_to_icc(3072));
>>
>> This will also need to change to icc_set_bw()
>>
>>
>>> @@ -781,6 +798,9 @@ int a6xx_gmu_stop(struct a6xx_gpu *a6xx_gpu)
>>> /* Tell RPMh to power off the GPU */
>>> a6xx_rpmh_stop(gmu);
>>>
>>> +   /* Remove the bus vote */
>>> +   icc_set(gpu->icc_path, 0, 0);
>>
>> This will also need to change to icc_set_bw()
>>
>>
>> I have the same questions for this series that I had in response to
>> the email ("[v5 2/3] drm/msm/dpu: Integrate interconnect API in MDSS")
>> 
>>
>>
>> Copy / pasting here (with minor name changes) so folks don't have to
>> follow links / search email.
>>
>> ==
>>
>> I'm curious what the plan is for landing this series.   Rob / Gerogi:
>> do you have any preference?  Options I'd imagine:
>>
>> A) Wait until interconnect lands (in 5.1?) and land this through
>> msm-next in the version after (5.2?)
>>
>> B) Georgi provides an immutable branch for interconnect when his lands
>> (assuming he's landing via pull request) and that gets pulled into the
>> the relevant drm tree.
>>
>> C) Rob Acks this series and indicates that it should go in through
>> Gerogi's tree (probably only works if Georgi plans to send a pull
>> request).  If we're going this route then (IIUC) we'd want to land
>> this in Gerogi's tree sooner rather than later so it can get some bake
>> time?  NOTE: as per my prior reply, I believe Rob has already Acked
>> this patch.
>>
> 
> I'm ok to ack and have it land via Georgi's tree, if Georgi wants to
> do that.  Or otherwise, I could maybe coordinate w/ airlied to send a
> 2nd late msm-next pr including the gpu and display interconnect
> patches.

I'm fine either way. But it would be nice if both patches (this one and
the dt-bindings) go together. The v6 of this patch applies cleanly to my
tree, but the next one (2/3) with the dt-bindings doesn't.

Thanks,
Georgi


Re: [PATCH v4 2/3] drm/msm/dpu: Integrate interconnect API in MDSS

2019-01-10 Thread Georgi Djakov
Hi Jayant,

On 12/21/18 08:20, Jayant Shekhar wrote:
> From: Sravanthi Kollukuduru 
> 
> The interconnect framework is designed to provide a
> standard kernel interface to control the settings of
> the interconnects on a SoC.
> 
> The interconnect API uses a consumer/provider-based model,
> where the providers are the interconnect buses and the
> consumers could be various drivers.
> 
> MDSS is one of the interconnect consumers which uses the
> interconnect APIs to get the path between endpoints and
> set its bandwidth/latency/QoS requirements for the given

Currently we set only bandwidth.

> interconnected path.
> 
> Changes in v2:
>   - Remove error log and unnecessary check (Jordan Crouse)
> 
> Changes in v3:
>   - Code clean involving variable name change, removal
> of extra paranthesis and variables (Matthias Kaehlcke)
> 
> Changes in v4:
>   - Add comments, spacings, tabs, proper port name
> and icc macro (Georgi Djakov)

The changes should not be part of the commit text, but should move below
the "---" line.

> 
> Signed-off-by: Sravanthi Kollukuduru 
> Signed-off-by: Jayant Shekhar 
> ---
>  drivers/gpu/drm/msm/disp/dpu1/dpu_mdss.c | 49 
> +---
>  1 file changed, 45 insertions(+), 4 deletions(-)
> 
> diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_mdss.c 
> b/drivers/gpu/drm/msm/disp/dpu1/dpu_mdss.c
> index 38576f8..fcaa71f 100644
> --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_mdss.c
> +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_mdss.c
> @@ -4,11 +4,15 @@
>   */
>  
>  #include "dpu_kms.h"
> +#include 
>  
>  #define to_dpu_mdss(x) container_of(x, struct dpu_mdss, base)
>  
>  #define HW_INTR_STATUS   0x0010
>  
> +/* Max BW defined in KBps */
> +#define MAX_BW   680
> +
>  struct dpu_mdss {
>   struct msm_mdss base;
>   void __iomem *mmio;
> @@ -16,8 +20,30 @@ struct dpu_mdss {
>   u32 hwversion;
>   struct dss_module_power mp;
>   struct dpu_irq_controller irq_controller;
> + struct icc_path *path[2];
> + u32 num_paths;
>  };
>  
> +static int dpu_mdss_parse_data_bus_icc_path(struct drm_device *dev,
> + struct dpu_mdss *dpu_mdss)

Nit: Please align to the open parenthesis.

Otherwise looks good to me.

Thanks,
Georgi


Re: [PATCH v4 2/3] drm/msm/dpu: Integrate interconnect API in MDSS

2019-01-10 Thread Georgi Djakov
Hi,

On 9.01.19 18:04, Doug Anderson wrote:
> Hi,
> 
> On Wed, Jan 9, 2019 at 6:38 AM Georgi Djakov  wrote:
>>
>> Hi Jayant,
>>
>> On 12/21/18 08:20, Jayant Shekhar wrote:
>>> From: Sravanthi Kollukuduru 
>>>
>>> The interconnect framework is designed to provide a
>>> standard kernel interface to control the settings of
>>> the interconnects on a SoC.
>>>
>>> The interconnect API uses a consumer/provider-based model,
>>> where the providers are the interconnect buses and the
>>> consumers could be various drivers.
>>>
>>> MDSS is one of the interconnect consumers which uses the
>>> interconnect APIs to get the path between endpoints and
>>> set its bandwidth/latency/QoS requirements for the given
>>
>> Currently we set only bandwidth.
>>
>>> interconnected path.
>>>
>>> Changes in v2:
>>>   - Remove error log and unnecessary check (Jordan Crouse)
>>>
>>> Changes in v3:
>>>   - Code clean involving variable name change, removal
>>> of extra paranthesis and variables (Matthias Kaehlcke)
>>>
>>> Changes in v4:
>>>   - Add comments, spacings, tabs, proper port name
>>> and icc macro (Georgi Djakov)
>>
>> The changes should not be part of the commit text, but should move below
>> the "---" line.
> 
> Drive-by comment that (I believe) they actually should be part of the
> commit text.  Convention for changes in the "drm" subsystem appears to
> be to include the revision log in the commit text.  This is unlike the
> rest of the kernel, but *shrug*.

Indeed! Thanks for the info! Then it's fine.

BR,
Georgi

> 
> Try, for instance:
> 
> git log --no-merges v4.19..v5.0-rc1 drivers/gpu/drm/msm | grep 'v[0-9]'
> 
> -Doug
> 


Re: [PATCH v3 2/3] drm/msm/dpu: Integrate interconnect API in MDSS

2018-12-20 Thread Georgi Djakov
Hi Sravanthi,

Thanks for the patch!

On 11/22/18 11:06, Sravanthi Kollukuduru wrote:
> The interconnect framework is designed to provide a
> standard kernel interface to control the settings of
> the interconnects on a SoC.
> 
> The interconnect API uses a consumer/provider-based model,
> where the providers are the interconnect buses and the
> consumers could be various drivers.
> 
> MDSS is one of the interconnect consumers which uses the
> interconnect APIs to get the path between endpoints and
> set its bandwidth/latency/QoS requirements for the given
> interconnected path.
> 
> Changes in v2:
>   - Remove error log and unnecessary check (Jordan Crouse)
> 
> Changes in v3:
>   - Code clean involving variable name change, removal
> of extra paranthesis and variables (Matthias Kaehlcke)
> 
> Signed-off-by: Sravanthi Kollukuduru 
> ---
>  drivers/gpu/drm/msm/disp/dpu1/dpu_mdss.c | 49 
> 
>  1 file changed, 44 insertions(+), 5 deletions(-)
> 
> diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_mdss.c 
> b/drivers/gpu/drm/msm/disp/dpu1/dpu_mdss.c
> index 38576f8b90b6..1387a6b1b39e 100644
> --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_mdss.c
> +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_mdss.c
> @@ -4,10 +4,12 @@
>   */
>  
>  #include "dpu_kms.h"
> +#include 
>  
>  #define to_dpu_mdss(x) container_of(x, struct dpu_mdss, base)
>  
> -#define HW_INTR_STATUS   0x0010
> +#define HW_INTR_STATUS   0x0010

Unrelated change?

> +#define MAX_BW   680

In what units? Maybe add a comment.

>  
>  struct dpu_mdss {
>   struct msm_mdss base;
> @@ -16,8 +18,30 @@ struct dpu_mdss {
>   u32 hwversion;
>   struct dss_module_power mp;
>   struct dpu_irq_controller irq_controller;
> + struct icc_path *path[2];
> + u32 num_paths;
>  };
>  
> +static int dpu_mdss_parse_data_bus_icc_path(
> + struct drm_device *dev, struct dpu_mdss *dpu_mdss)

Nit: Lines should not end with a '('. Please move the first argument up:

static int dpu_mdss_parse_data_bus_icc_path(struct drm_device *dev,
struct dpu_mdss *dpu_mdss)

> +{
> + struct icc_path *path0 = of_icc_get(dev->dev, "port0");
> + struct icc_path *path1 = of_icc_get(dev->dev, "port1");

In DT it's preferred that the name contains both the source and
destination, so maybe of_icc_get(dev->dev, "mdp0-mem") etc.

> +
> + if (IS_ERR(path0))
> + return PTR_ERR(path0);
> +
> + dpu_mdss->path[0] = path0;
> + dpu_mdss->num_paths = 1;
> +
> + if (!IS_ERR(path1)) {
> + dpu_mdss->path[1] = path1;
> + dpu_mdss->num_paths++;
> + }
> +
> + return 0;
> +}
> +
>  static irqreturn_t dpu_mdss_irq(int irq, void *arg)
>  {
>   struct dpu_mdss *dpu_mdss = arg;
> @@ -127,7 +151,11 @@ static int dpu_mdss_enable(struct msm_mdss *mdss)
>  {
>   struct dpu_mdss *dpu_mdss = to_dpu_mdss(mdss);
>   struct dss_module_power *mp = &dpu_mdss->mp;
> - int ret;
> + int ret, i;
> + u64 avg_bw = dpu_mdss->num_paths ? MAX_BW/dpu_mdss->num_paths : 0;

Nit: Please add spaces around "/"

> +
> + for (i = 0; i < dpu_mdss->num_paths; i++)
> + icc_set(dpu_mdss->path[i], avg_bw, MAX_BW);

Now we have macros in the header, that can be used to specify the
bandwidth units. So please use kBps_to_icc or MBps_to_icc etc. If we
decide in the future to change the internal units, we will be able to do
it without touching the users.

Thanks,
Georgi

>  
>   ret = msm_dss_enable_clk(mp->clk_config, mp->num_clk, true);
>   if (ret)
> @@ -140,12 +168,15 @@ static int dpu_mdss_disable(struct msm_mdss *mdss)
>  {
>   struct dpu_mdss *dpu_mdss = to_dpu_mdss(mdss);
>   struct dss_module_power *mp = &dpu_mdss->mp;
> - int ret;
> + int ret, i;
>  
>   ret = msm_dss_enable_clk(mp->clk_config, mp->num_clk, false);
>   if (ret)
>   DPU_ERROR("clock disable failed, ret:%d\n", ret);
>  
> + for (i = 0; i < dpu_mdss->num_paths; i++)
> + icc_set(dpu_mdss->path[i], 0, 0);
> +
>   return ret;
>  }
>  
> @@ -155,6 +186,7 @@ static void dpu_mdss_destroy(struct drm_device *dev)
>   struct msm_drm_private *priv = dev->dev_private;
>   struct dpu_mdss *dpu_mdss = to_dpu_mdss(priv->mdss);
>   struct dss_module_power *mp = &dpu_mdss->mp;
> + int i;
>  
>   pm_runtime_disable(dev->dev);
>   _dpu_mdss_irq_domain_fini(dpu_mdss);
> @@ -162,6 +194,9 @@ static void dpu_mdss_destroy(struct drm_device *dev)
>   msm_dss_put_clk(mp->clk_config, mp->num_clk);
>   devm_kfree(&pdev->dev, mp->clk_config);
>  
> + for (i = 0; i < dpu_mdss->num_paths; i++)
> + icc_put(dpu_mdss->path[i]);
> +
>   if (dpu_mdss->mmio)
>   devm_iounmap(&pdev->dev, dpu_mdss->mmio);
>   dpu_mdss->mmio = NULL;
> @@ -200,6 +235,10 @@ int dpu_mdss_init(struct drm_device *dev)
>   }
>   dpu_mdss->mmio_len

Re: [PATCH v3 3/3] dt-bindings: msm/disp: Introduce interconnect bindings for MDSS on SDM845

2018-12-20 Thread Georgi Djakov
Hi Sravanthi,

Thanks for the patch!

On 11/22/18 11:06, Sravanthi Kollukuduru wrote:
> Add interconnect properties such as interconnect provider specifier
> , the edge source and destination ports which are required by the
> interconnect API to configure interconnect path for MDSS.
> 
> Changes in v2:
>   - none
> 
> Changes in v3:
>   - Remove common property definitions (Rob Herring)
> 
> Signed-off-by: Sravanthi Kollukuduru 
> ---
>  Documentation/devicetree/bindings/display/msm/dpu.txt | 9 +
>  1 file changed, 9 insertions(+)
> 
> diff --git a/Documentation/devicetree/bindings/display/msm/dpu.txt 
> b/Documentation/devicetree/bindings/display/msm/dpu.txt
> index ad2e8830324e..d75b4360a4be 100644
> --- a/Documentation/devicetree/bindings/display/msm/dpu.txt
> +++ b/Documentation/devicetree/bindings/display/msm/dpu.txt
> @@ -28,6 +28,11 @@ Required properties:
>  - #address-cells: number of address cells for the MDSS children. Should be 1.
>  - #size-cells: Should be 1.
>  - ranges: parent bus address space is the same as the child bus address 
> space.
> +- interconnects : interconnect path specifier for MDSS according to
> +  Documentation/devicetree/bindings/interconnect/interconnect.txt. Should be
> +  2 paths corresponding to 2 AXI ports.
> +- interconnect-names : MDSS will have 2 port names to differentiate between 
> the
> +  2 interconnect paths defined with interconnect specifier.
>  
>  Optional properties:
>  - assigned-clocks: list of clock specifiers for clocks needing rate 
> assignment
> @@ -86,6 +91,10 @@ Example:
>   interrupt-controller;
>   #interrupt-cells = <1>;
>  
> + interconnects = <&qnoc 38 &qnoc 512>,
> + <&qnoc 39 &qnoc 512>;

Please use string names instead of hard-coded integers.

interconnects = <&rsc_hlos MASTER_MDP0 &rsc_hlos SLAVE_EBI1>,
<&rsc_hlos MASTER_MDP1 &rsc_hlos SLAVE_EBI1>;

> + interconnect-names = "port0", "port1";

We are trying to be more descriptive and include both the source and the
destination like: "mdp0-mem", "mdp1-mem"

> +
>   iommus = <&apps_iommu 0>;
>  
>   #address-cells = <2>;

Otherwise looks good.

Thanks,
Georgi


Re: [PATCH 1/1] drm/msm/a6xx: Add support for an interconnect path

2018-12-06 Thread Georgi Djakov
Hi Jordan,

Thanks for the patch!

On 11/29/18 19:26, Jordan Crouse wrote:
> Try to get the interconnect path for the GPU and vote for the maximum
> bandwidth to support all frequencies. This is needed for performance.
> Later we will want to scale the bandwidth based on the frequency to
> also optimize for power but that will require some device tree
> infrastructure that does not yet exist.
> 
> v3: Absolute bandwidth values should be specified in KBps

Btw, I have now also included macros in the header that can be used to
specify the bandwidth units. So you can now use kBps_to_icc or
MBps_to_icc etc. If we decide at some point to change the units we use
internally, we will not have to update all the users.
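
For reference, the helpers just convert to the framework's internal unit
(kB/s); they look roughly like this (the exact set is in
include/linux/interconnect.h):

#define Bps_to_icc(x)	((x) / 1000)
#define kBps_to_icc(x)	(x)
#define MBps_to_icc(x)	((x) * 1000)
#define GBps_to_icc(x)	((x) * 1000 * 1000)
#define kbps_to_icc(x)	((x) / 8 + ((x) % 8 ? 1 : 0))
#define Mbps_to_icc(x)	((x) * 1000 / 8)
#define Gbps_to_icc(x)	((x) * 1000 * 1000 / 8)

So the vote below could be written as
icc_set(gpu->icc_path, 0, MBps_to_icc(7216)) instead of open-coding
7216000.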

> 
> Signed-off-by: Jordan Crouse 
> ---
>  drivers/gpu/drm/msm/adreno/a6xx_gmu.c   | 20 
>  drivers/gpu/drm/msm/adreno/adreno_gpu.c |  9 +
>  drivers/gpu/drm/msm/msm_gpu.h   |  3 +++
>  3 files changed, 32 insertions(+)
> 
> diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c 
> b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
> index 546599a7ab05..fe0f5b10fd9c 100644
> --- a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
> +++ b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
> @@ -3,6 +3,7 @@
>  
>  #include 
>  #include 
> +#include <linux/interconnect.h>
>  #include 

Alphabetic order maybe?

>  #include "a6xx_gpu.h"
> @@ -63,6 +64,9 @@ static bool a6xx_gmu_gx_is_on(struct a6xx_gmu *gmu)
>  
>  static void __a6xx_gmu_set_freq(struct a6xx_gmu *gmu, int index)
>  {
> + struct a6xx_gpu *a6xx_gpu = container_of(gmu, struct a6xx_gpu, gmu);
> + struct adreno_gpu *adreno_gpu = &a6xx_gpu->base;
> + struct msm_gpu *gpu = &adreno_gpu->base;
>   int ret;
>  
>   gmu_write(gmu, REG_A6XX_GMU_DCVS_ACK_OPTION, 0);
> @@ -85,6 +89,12 @@ static void __a6xx_gmu_set_freq(struct a6xx_gmu *gmu, int 
> index)
>   dev_err(gmu->dev, "GMU set GPU frequency error: %d\n", ret);
>  
>   gmu->freq = gmu->gpu_freqs[index];
> +
> + /*
> +  * Eventually we will want to scale the path vote with the frequency but
> +  * for now leave it t at max so that the performance is nominal.

An extra t above.

> +  */
> + icc_set(gpu->icc_path, 0, 7216000);
>  }
>  
>  void a6xx_gmu_set_freq(struct msm_gpu *gpu, unsigned long freq)
> @@ -680,6 +690,8 @@ int a6xx_gmu_reset(struct a6xx_gpu *a6xx_gpu)
>  
>  int a6xx_gmu_resume(struct a6xx_gpu *a6xx_gpu)
>  {
> + struct adreno_gpu *adreno_gpu = &a6xx_gpu->base;
> + struct msm_gpu *gpu = &adreno_gpu->base;
>   struct a6xx_gmu *gmu = &a6xx_gpu->gmu;
>   int status, ret;
>  
> @@ -695,6 +707,9 @@ int a6xx_gmu_resume(struct a6xx_gpu *a6xx_gpu)
>   if (ret)
>   goto out;
>  
> + /* Set the bus quota to a reasonable value for boot */
> + icc_set(gpu->icc_path, 0, 3072000);

Here you can do:
icc_set(gpu->icc_path, 0, MBps_to_icc(3072));

> +
>   a6xx_gmu_irq_enable(gmu);
>  
>   /* Check to see if we are doing a cold or warm boot */
> @@ -735,6 +750,8 @@ bool a6xx_gmu_isidle(struct a6xx_gmu *gmu)
>  
>  int a6xx_gmu_stop(struct a6xx_gpu *a6xx_gpu)
>  {
> + struct adreno_gpu *adreno_gpu = &a6xx_gpu->base;
> + struct msm_gpu *gpu = &adreno_gpu->base;
>   struct a6xx_gmu *gmu = &a6xx_gpu->gmu;
>   u32 val;
>  
> @@ -781,6 +798,9 @@ int a6xx_gmu_stop(struct a6xx_gpu *a6xx_gpu)
>   /* Tell RPMh to power off the GPU */
>   a6xx_rpmh_stop(gmu);
>  
> + /* Remove the bus vote */
> + icc_set(gpu->icc_path, 0, 0);
> +
>   clk_bulk_disable_unprepare(gmu->nr_clocks, gmu->clocks);
>  
>   pm_runtime_put_sync(gmu->dev);
> diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.c 
> b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
> index 93d70f4a2154..9bab491912cf 100644
> --- a/drivers/gpu/drm/msm/adreno/adreno_gpu.c
> +++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
> @@ -21,6 +21,7 @@
>  #include 
>  #include 
>  #include 
> +#include <linux/interconnect.h>

Alphabetic order?

>  #include "adreno_gpu.h"
>  #include "msm_gem.h"
>  #include "msm_mmu.h"
> @@ -695,6 +696,11 @@ static int adreno_get_pwrlevels(struct device *dev,
>  
>   DBG("fast_rate=%u, slow_rate=2700", gpu->fast_rate);
>  
> + /* Check for an interconnect path for the bus */
> + gpu->icc_path = of_icc_get(dev, "port0");

I was wondering if port0 is an appropriate name here. I assume this is the
port to DDR. Maybe we could name it gfx-mem or gpu-mem. Are there any
other interconnects that need to be scaled on the a6xx?

> + if (IS_ERR(gpu->icc_path))
> + gpu->icc_path = NULL;
> +
>   return 0;
>  }
>  
> @@ -733,10 +739,13 @@ int adreno_gpu_init(struct drm_device *drm, struct 
> platform_device *pdev,
>  
>  void adreno_gpu_cleanup(struct adreno_gpu *adreno_gpu)
>  {
> + struct msm_gpu *gpu = &adreno_gpu->base;
>   unsigned int i;
>  
>   for (i = 0; i < ARRAY_SIZE(adreno_gpu->info->fw); i++)
>   release_firmware(adreno_gpu->fw[i]);
>  
> + icc_put(gpu->icc_path);
> +
>   msm_g

Re: [PATCH 6/9] PM / OPP: dt-bindings: Add opp-interconnect-bw

2018-09-27 Thread Georgi Djakov
Hi Jordan,

Thanks for the patch!

On 08/27/2018 06:11 PM, Jordan Crouse wrote:
> Add the "opp-interconnect-bw" property to specify the
> average and peak bandwidth for an interconnect path for
> a specific operating power point. A separate bandwidth
> pair can be specified for each of the interconnects
> defined for the device by appending the interconnect
> name to the property.
> 
> Signed-off-by: Jordan Crouse 
> ---
>  Documentation/devicetree/bindings/opp/opp.txt | 36 +++
>  1 file changed, 36 insertions(+)
> 
> diff --git a/Documentation/devicetree/bindings/opp/opp.txt 
> b/Documentation/devicetree/bindings/opp/opp.txt
> index c396c4c0af92..d714c084f36d 100644
> --- a/Documentation/devicetree/bindings/opp/opp.txt
> +++ b/Documentation/devicetree/bindings/opp/opp.txt
> @@ -170,6 +170,11 @@ Optional properties:
>functioning of the current device at the current OPP (where this property 
> is
>present).
>  
> +- opp-interconnect-bw-<name>: This is an array of pairs specifying the 
> average
> +  and peak bandwidth in bytes per second for the interconnect path known by
> +  'name'.  This should match the name(s) specified by interconnect-names in 
> the
> +  device definition.
> +
>  Example 1: Single cluster Dual-core ARM cortex A9, switch DVFS states 
> together.
>  
>  / {
> @@ -543,3 +548,34 @@ Example 6: opp-microvolt-<name>, opp-microamp-<name>:
>   };
>   };
>  };
> +
> +Example 7: opp-interconnect-bw:
> +(example: leaf device with frequency and BW quotas)
> +
> +/ {
> + soc {
> + gpu@500 {
> + ...
> + interconnects = <&qnoc 26 &qnoc 512>;
> + interconnect-names = "port0";
> + ...
> + operating-points-v2 = <&gpu_opp_table>;
> + };
> + };
> +
> + gpu_opp_table: opp_table0 {
> + compatible = "operating-points-v2";
> +
> + opp-71000 {
> + op-hz = /bits/ 64 <71000>;
> + /* Set peak bandwidth @ 7216000 KB/s */
> + opp-interconnect-bw-port0 = /bits/ 64 <0 721600>;

This seems a bit long. I would suggest the following instead.
If there is only one path:
/* average bandwidth = 0 KB/s,  peak bandwidth = 7216000 KB/s */
opp-bw-KBps = <0 7216000>;
or  
opp-bw-MBps = <0 7216>;

If there are multiple paths:
opp-bw-KBps-port0 = <0 7216000>;
opp-bw-KBps-port1 = <0 100>;

The above follows a convention similar to opp-microvolt, where at
runtime the platform can pick a <name> and a matching opp-bw-KBps-<name>
property. If the platform doesn't pick a specific <name>, or <name> does
not match any of the opp-bw-KBps-<name> properties, then opp-bw-KBps
shall be used if present.
For now, supporting only KBps values seems enough to cover all present
platforms, so we can start with this and, based on the requirements of
future platforms, add MBps etc.
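
If it helps, the driver-side lookup for such per-<name> properties could
be sketched like this (opp_read_bw_kBps() and the fallback logic are
only illustrative, not an existing API):

#include <linux/kernel.h>
#include <linux/of.h>

static int opp_read_bw_kBps(struct device_node *np, const char *name,
			    u32 *avg_kBps, u32 *peak_kBps)
{
	char prop[32];
	u32 bw[2];
	int ret;

	/* Try the per-path property first, e.g. "opp-bw-KBps-port0" */
	snprintf(prop, sizeof(prop), "opp-bw-KBps-%s", name);
	ret = of_property_read_u32_array(np, prop, bw, 2);
	if (ret)
		/* Fall back to the common "opp-bw-KBps" property */
		ret = of_property_read_u32_array(np, "opp-bw-KBps", bw, 2);
	if (ret)
		return ret;

	*avg_kBps = bw[0];
	*peak_kBps = bw[1];
	return 0;
}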

Thanks,
Georgi

> + };
> +
> + opp-21000 {
> + op-hz = /bits/ 64 <21000>;
> + /* Set peak bandwidth @ 120 KB/s */
> + opp-interconnect-bw-port0 = /bits/ 64 <0 12>;
> + };
> + };
> +};
> 