Re: [PATCH v3 00/12] drm/imx/ipuv3: switch LDB and parallel-display driver to use drm_bridge_connector

2024-06-03 Thread Chris Healy
On Mon, Jun 3, 2024 at 3:12 AM Dmitry Baryshkov
 wrote:
>
> On Sun, Jun 02, 2024 at 08:25:39PM -0700, Chris Healy wrote:
> > On an i.MX53 QSB with HDMI daughter board, this patch series is:
> >
> > Tested-by: Chris Healy 
>
> Thank you! I assume this is imx53-qsrb-hdmi?

Yes
>
> >
> > HDMI output still works correctly and the bridges file reflects the changes:
> >
> > Before:
> >
> > root:/sys/kernel/debug/dri/display-subsystem/encoder-0 cat bridges
> > bridge[0]: 0xc0fa76d8
> > type: [0] Unknown
> > ops: [0x0]
> > bridge[1]: 0xc0fba03c
> > type: [0] Unknown
> > OF: /soc/bus@6000/i2c@63fc4000/bridge-hdmi@39:sil,sii9022
> > ops: [0x7] detect edid hpd
> >
> >
> > After:
> >
> > root:/sys/kernel/debug/dri/display-subsystem/encoder-0 cat bridges
> > bridge[0]: 0xc0fa76d8
> > type: [0] Unknown
> > ops: [0x0]
> > bridge[1]: 0xc0fb9f5c
> > type: [0] Unknown
> > OF: /soc/bus@6000/i2c@63fc4000/bridge-hdmi@39:sil,sii9022
> > ops: [0x7] detect edid hpd
> > bridge[2]: 0xc0fb9794
> > type: [11] HDMI-A
> > OF: /connector-hdmi:hdmi-connector
> > ops: [0x0]
> >
> > On Sun, Jun 2, 2024 at 5:04 AM Dmitry Baryshkov
> >  wrote:
> > >
> > > The IPUv3 DRM i.MX driver contains several codepaths for different
> > > use cases: both LDB and parallel-display drivers handle next-bridge,
> > > panel and the legacy display-timings DT node on their own.
> > >
> > > Drop the unused ddc-i2c-bus and edid handling (none of the DT files merged
> > > upstream ever used these features), switch to the panel-bridge driver,
> > > removing the need to handle drm_panel codepaths separately, and finally
> > > switch to drm_bridge_connector, removing the requirement for downstream
> > > bridges to create a drm_connector on their own.
> > >
> > > This has been tested on the iMX53 with the DPI panel attached to LDB
> > > via an LVDS decoder, using all possible use cases (lvds-codec + panel,
> > > panel linked directly to the LDB node, and the display-timings node).
> > >
> > > To be able to test on the iMX53 QSRB with the HDMI cape, apply [1] and [2].
> > >
> > > [1] 
> > > https://lore.kernel.org/all/20240514030718.533169-1-victor@nxp.com/
> > > [2] 
> > > https://lore.kernel.org/all/20240602-imx-sii902x-defconfig-v1-1-71a6c382b...@linaro.org/
> > >
> > > Signed-off-by: Dmitry Baryshkov 
> > > ---
> > > Changes in v3:
> > > - Note (soft) dependencies in the cover letter (Chris)
> > > - Select DRM_BRIDGE instead of depending on it (Philipp)
> > > - Dropped unused selection of DRM_PANEL (Philipp)
> > > - Added missing include of  to parallel-display.c
> > >   (Philipp)
> > > - Link to v2: 
> > > https://lore.kernel.org/r/20240331-drm-imx-cleanup-v2-0-d81c1d1c1...@linaro.org
> > >
> > > Changes in v2:
> > > - Fixed drm_bridge_attach flags in imx/parallel-display driver.
> > > - Moved the legacy bridge to drivers/gpu/drm/bridge
> > > - Added missing EXPORT_SYMBOL_GPL to the iMX legacy bridge
> > > - Link to v1: 
> > > https://lore.kernel.org/r/20240311-drm-imx-cleanup-v1-0-e104f05ca...@linaro.org
> > >
> > > ---
> > > Dmitry Baryshkov (12):
> > >   dt-bindings: display: fsl-imx-drm: drop edid property support
> > >   dt-bindings: display: imx/ldb: drop ddc-i2c-bus property
> > >   drm/imx: cleanup the imx-drm header
> > >   drm/imx: parallel-display: drop edid override support
> > >   drm/imx: ldb: drop custom EDID support
> > >   drm/imx: ldb: drop custom DDC bus support
> > >   drm/imx: ldb: switch to drm_panel_bridge
> > >   drm/imx: parallel-display: switch to drm_panel_bridge
> > >   drm/imx: add internal bridge handling display-timings DT node
> > >   drm/imx: ldb: switch to imx_legacy_bridge / drm_bridge_connector
> > >   drm/imx: parallel-display: switch to imx_legacy_bridge / 
> > > drm_bridge_connector
> > >   drm/imx: move imx_drm_connector_destroy to imx-tve
> > >
> > >  .../bindings/display/imx/fsl-imx-drm.txt   |   2 -
> > >  .../devicetree/bindings/display/imx/ldb.txt|   1 -
> > >  drivers/gpu/drm/bridge/imx/Kconfig |  10 +
> > >  drivers/gpu/drm/bridge/imx/Makefile|   1 +
> > >  drivers/g

Re: [PATCH v3 00/12] drm/imx/ipuv3: switch LDB and parallel-display driver to use drm_bridge_connector

2024-06-02 Thread Chris Healy
On an i.MX53 QSB with HDMI daughter board, this patch series is:

Tested-by: Chris Healy 

HDMI output still works correctly and the bridges file reflects the changes:

Before:

root:/sys/kernel/debug/dri/display-subsystem/encoder-0 cat bridges
bridge[0]: 0xc0fa76d8
type: [0] Unknown
ops: [0x0]
bridge[1]: 0xc0fba03c
type: [0] Unknown
OF: /soc/bus@6000/i2c@63fc4000/bridge-hdmi@39:sil,sii9022
ops: [0x7] detect edid hpd


After:

root:/sys/kernel/debug/dri/display-subsystem/encoder-0 cat bridges
bridge[0]: 0xc0fa76d8
type: [0] Unknown
ops: [0x0]
bridge[1]: 0xc0fb9f5c
type: [0] Unknown
OF: /soc/bus@6000/i2c@63fc4000/bridge-hdmi@39:sil,sii9022
ops: [0x7] detect edid hpd
bridge[2]: 0xc0fb9794
type: [11] HDMI-A
OF: /connector-hdmi:hdmi-connector
ops: [0x0]

On Sun, Jun 2, 2024 at 5:04 AM Dmitry Baryshkov
 wrote:
>
> The IPUv3 DRM i.MX driver contains several codepaths for different
> use cases: both LDB and parallel-display drivers handle next-bridge,
> panel and the legacy display-timings DT node on their own.
>
> Drop the unused ddc-i2c-bus and edid handling (none of the DT files merged
> upstream ever used these features), switch to the panel-bridge driver,
> removing the need to handle drm_panel codepaths separately, and finally
> switch to drm_bridge_connector, removing the requirement for downstream
> bridges to create a drm_connector on their own.
>
> This has been tested on the iMX53 with the DPI panel attached to LDB
> via an LVDS decoder, using all possible use cases (lvds-codec + panel,
> panel linked directly to the LDB node, and the display-timings node).
>
> To be able to test on the iMX53 QSRB with the HDMI cape, apply [1] and [2].
>
> [1] https://lore.kernel.org/all/20240514030718.533169-1-victor@nxp.com/
> [2] 
> https://lore.kernel.org/all/20240602-imx-sii902x-defconfig-v1-1-71a6c382b...@linaro.org/
>
> Signed-off-by: Dmitry Baryshkov 
> ---
> Changes in v3:
> - Note (soft) dependencies in the cover letter (Chris)
> - Select DRM_BRIDGE instead of depending on it (Philipp)
> - Dropped unused selection of DRM_PANEL (Philipp)
> - Added missing include of  to parallel-display.c
>   (Philipp)
> - Link to v2: 
> https://lore.kernel.org/r/20240331-drm-imx-cleanup-v2-0-d81c1d1c1...@linaro.org
>
> Changes in v2:
> - Fixed drm_bridge_attach flags in imx/parallel-display driver.
> - Moved the legacy bridge to drivers/gpu/drm/bridge
> - Added missing EXPORT_SYMBOL_GPL to the iMX legacy bridge
> - Link to v1: 
> https://lore.kernel.org/r/20240311-drm-imx-cleanup-v1-0-e104f05ca...@linaro.org
>
> ---
> Dmitry Baryshkov (12):
>   dt-bindings: display: fsl-imx-drm: drop edid property support
>   dt-bindings: display: imx/ldb: drop ddc-i2c-bus property
>   drm/imx: cleanup the imx-drm header
>   drm/imx: parallel-display: drop edid override support
>   drm/imx: ldb: drop custom EDID support
>   drm/imx: ldb: drop custom DDC bus support
>   drm/imx: ldb: switch to drm_panel_bridge
>   drm/imx: parallel-display: switch to drm_panel_bridge
>   drm/imx: add internal bridge handling display-timings DT node
>   drm/imx: ldb: switch to imx_legacy_bridge / drm_bridge_connector
>   drm/imx: parallel-display: switch to imx_legacy_bridge / 
> drm_bridge_connector
>   drm/imx: move imx_drm_connector_destroy to imx-tve
>
>  .../bindings/display/imx/fsl-imx-drm.txt   |   2 -
>  .../devicetree/bindings/display/imx/ldb.txt|   1 -
>  drivers/gpu/drm/bridge/imx/Kconfig |  10 +
>  drivers/gpu/drm/bridge/imx/Makefile|   1 +
>  drivers/gpu/drm/bridge/imx/imx-legacy-bridge.c |  85 +
>  drivers/gpu/drm/imx/ipuv3/Kconfig  |  10 +-
>  drivers/gpu/drm/imx/ipuv3/imx-drm-core.c   |   7 -
>  drivers/gpu/drm/imx/ipuv3/imx-drm.h|  14 --
>  drivers/gpu/drm/imx/ipuv3/imx-ldb.c| 203 
> +
>  drivers/gpu/drm/imx/ipuv3/imx-tve.c|   8 +-
>  drivers/gpu/drm/imx/ipuv3/parallel-display.c   | 139 +++---
>  include/drm/bridge/imx.h   |  13 ++
>  12 files changed, 187 insertions(+), 306 deletions(-)
> ---
> base-commit: 850ca533e572247b6f71dafcbf7feb0359350963
> change-id: 20240310-drm-imx-cleanup-10746a9b71f5
>
> Best regards,
> --
> Dmitry Baryshkov 
>
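
The drm_bridge_connector pattern the series switches to looks roughly like
the minimal sketch below, using only the generic DRM bridge APIs. This is
an illustration of the pattern, not the exact code from the series:

    struct drm_connector *connector;
    int ret;

    /* Attach the bridge chain without letting any bridge create its
     * own connector, then let the DRM core build a single connector
     * that wraps the whole chain. */
    ret = drm_bridge_attach(encoder, bridge, NULL,
                            DRM_BRIDGE_ATTACH_NO_CONNECTOR);
    if (ret)
        return ret;

    connector = drm_bridge_connector_init(drm, encoder);
    if (IS_ERR(connector))
        return PTR_ERR(connector);

    drm_connector_attach_encoder(connector, encoder);

With this, downstream bridges such as the sii9022 no longer need to create
a drm_connector of their own, which is why the extra bridge[2]
(the hdmi-connector node) shows up in the "After" debugfs listing above.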


Re: [PATCH v3 11/12] drm/imx: parallel-display: switch to imx_legacy_bridge / drm_bridge_connector

2024-06-02 Thread Chris Healy
On Sun, Jun 2, 2024 at 5:04 AM Dmitry Baryshkov
 wrote:
>
> Use the imx_legacy bridge driver instead of handlign display modes via
> the connector node.

fix spelling of "handling"
>
> All existing use cases already support attaching using
> the DRM_BRIDGE_ATTACH_NO_CONNECTOR flag, while the imx_legacy bridge
> doesn't support creating a connector at all. Switch to
> drm_bridge_connector at the same time.
>
> Signed-off-by: Dmitry Baryshkov 
> ---
>  drivers/gpu/drm/imx/ipuv3/parallel-display.c | 100 
> ++-
>  1 file changed, 20 insertions(+), 80 deletions(-)
>
> diff --git a/drivers/gpu/drm/imx/ipuv3/parallel-display.c 
> b/drivers/gpu/drm/imx/ipuv3/parallel-display.c
> index 9ac2a94fa62b..70f62e89622e 100644
> --- a/drivers/gpu/drm/imx/ipuv3/parallel-display.c
> +++ b/drivers/gpu/drm/imx/ipuv3/parallel-display.c
> @@ -12,19 +12,18 @@
>  #include 
>  #include 
>
> -#include 
> -
>  #include 
>  #include 
> +#include 
>  #include 
>  #include 
>  #include 
>  #include 
> +#include 
>
>  #include "imx-drm.h"
>
>  struct imx_parallel_display_encoder {
> -   struct drm_connector connector;
> struct drm_encoder encoder;
> struct drm_bridge bridge;
> struct imx_parallel_display *pd;
> @@ -33,51 +32,14 @@ struct imx_parallel_display_encoder {
>  struct imx_parallel_display {
> struct device *dev;
> u32 bus_format;
> -   u32 bus_flags;
> -   struct drm_display_mode mode;
> struct drm_bridge *next_bridge;
>  };
>
> -static inline struct imx_parallel_display *con_to_imxpd(struct drm_connector 
> *c)
> -{
> -   return container_of(c, struct imx_parallel_display_encoder, 
> connector)->pd;
> -}
> -
>  static inline struct imx_parallel_display *bridge_to_imxpd(struct drm_bridge 
> *b)
>  {
> return container_of(b, struct imx_parallel_display_encoder, 
> bridge)->pd;
>  }
>
> -static int imx_pd_connector_get_modes(struct drm_connector *connector)
> -{
> -   struct imx_parallel_display *imxpd = con_to_imxpd(connector);
> -   struct device_node *np = imxpd->dev->of_node;
> -   int num_modes;
> -
> -   if (np) {
> -   struct drm_display_mode *mode = 
> drm_mode_create(connector->dev);
> -   int ret;
> -
> -   if (!mode)
> -   return 0;
> -
> -   ret = of_get_drm_display_mode(np, &imxpd->mode,
> - &imxpd->bus_flags,
> - OF_USE_NATIVE_MODE);
> -   if (ret) {
> -   drm_mode_destroy(connector->dev, mode);
> -   return 0;
> -   }
> -
> -   drm_mode_copy(mode, &imxpd->mode);
> -   mode->type |= DRM_MODE_TYPE_DRIVER | DRM_MODE_TYPE_PREFERRED;
> -   drm_mode_probed_add(connector, mode);
> -   num_modes++;
> -   }
> -
> -   return num_modes;
> -}
> -
>  static const u32 imx_pd_bus_fmts[] = {
> MEDIA_BUS_FMT_RGB888_1X24,
> MEDIA_BUS_FMT_BGR888_1X24,
> @@ -171,7 +133,6 @@ static int imx_pd_bridge_atomic_check(struct drm_bridge 
> *bridge,
>  {
> struct imx_crtc_state *imx_crtc_state = to_imx_crtc_state(crtc_state);
> struct drm_display_info *di = &conn_state->connector->display_info;
> -   struct imx_parallel_display *imxpd = bridge_to_imxpd(bridge);
> struct drm_bridge_state *next_bridge_state = NULL;
> struct drm_bridge *next_bridge;
> u32 bus_flags, bus_fmt;
> @@ -183,10 +144,8 @@ static int imx_pd_bridge_atomic_check(struct drm_bridge 
> *bridge,
>
> if (next_bridge_state)
> bus_flags = next_bridge_state->input_bus_cfg.flags;
> -   else if (di->num_bus_formats)
> -   bus_flags = di->bus_flags;
> else
> -   bus_flags = imxpd->bus_flags;
> +   bus_flags = di->bus_flags;
>
> bus_fmt = bridge_state->input_bus_cfg.format;
> if (!imx_pd_format_supported(bus_fmt))
> @@ -202,19 +161,16 @@ static int imx_pd_bridge_atomic_check(struct drm_bridge 
> *bridge,
> return 0;
>  }
>
> -static const struct drm_connector_funcs imx_pd_connector_funcs = {
> -   .fill_modes = drm_helper_probe_single_connector_modes,
> -   .destroy = imx_drm_connector_destroy,
> -   .reset = drm_atomic_helper_connector_reset,
> -   .atomic_duplicate_state = drm_atomic_helper_connector_duplicate_state,
> -   .atomic_destroy_state = drm_atomic_helper_connector_destroy_state,
> -};
> +static int imx_pd_bridge_attach(struct drm_bridge *bridge,
> +   enum drm_bridge_attach_flags flags)
> +{
> +   struct imx_parallel_display *imxpd = bridge_to_imxpd(bridge);
>
> -static const struct drm_connector_helper_funcs imx_pd_connector_helper_funcs 
> = {
> -   .get_modes = imx_pd_connector_get_modes,
> -};
> +   return drm_bridge_attach(bridge->encoder, imxpd->next_bridge, bridge, 
> flags);
>

Re: drm/etnaviv: slow down FE idle polling

2023-06-15 Thread Chris Healy
Jingfeng,

Does your design have any bus PMU counters that can be used to measure
DRAM bandwidth of the 3D GPU directly or even indirectly?

Regards,

Chris

On Thu, Jun 15, 2023 at 2:53 AM Lucas Stach  wrote:
>
> Am Donnerstag, dem 15.06.2023 um 17:37 +0800 schrieb Sui Jingfeng:
> > Hi,
> >
> [...]
> > > > > > > +
> > > > > > > +        /*
> > > > > > > +         * Choose number of wait cycles to target a ~30us (1/32768) max
> > > > > > > +         * latency until new work is picked up by the FE when it polls
> > > > > > > +         * in the idle loop.
> > > > > > > +         */
> > > > > > > +        gpu->fe_waitcycles = min(gpu->base_rate_core >> (15 - gpu->freq_scale),
> > > > > > > +                                 0xffffUL);
> > > > > > This patch is NOT effective on our hardware GC1000 v5037 (ls7a1000 +
> > > > > > ls3a5000).
> > > > > >
> > > > > > As gpu->base_rate_core is 0, in the end gpu->fe_waitcycles
> > > > > > is also zero.
> > > > > >
> > > > > Uh, that's a problem, as the patch will then have the opposite effect
> > > > > on your platform by speeding up the idle loop. Thanks for catching
> > > > > this! I'll improve the patch to keep a reasonable amount of wait
> > > > > cycles in this case.
> > > > It's OK, no big problem as far as I can see. (It's my platform's
> > > > problem, not yours.)
> > > >
> > > It will become a problem, as it eats up the bandwidth that you want to
> > > spend on real graphics work.
> > >
> > > > Merging it is also OK; if we find something wrong we can fix it
> > > > with another patch.
> > > >
> > > Hmm... I think that the fix for this problem is more or less an extra
> > > if, so I would love to see a proper fix
> > > before this patch gets merged.
>
> Right, we don't merge known broken stuff. We are all humans and bugs
> and oversights happen, but we don't knowingly regress things.
>
> >
> > It just has no effect (at least none that I can find).
> >
> > I have tried; the glmark2 score does not change, neither for better
> > nor for worse.
>
> That's because it only affects your system when the GPU is idle but
> isn't in runtime PM yet. If you measure the DRAM bandwidth in that time
> window you'll see that the GPU now uses much more bandwidth, slowing
> down other workloads.
>
> Regards,
> Lucas
>
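
The "extra if" discussed above could look like the sketch below. The
200-cycle floor is an assumed placeholder for "a reasonable amount of wait
cycles", not a value taken from this thread:

    /* Fall back to a fixed number of wait cycles when the base clock
     * rate is unknown (reported as 0 on some platforms), so that a
     * zero wait value doesn't speed up the FE idle loop instead. */
    if (gpu->base_rate_core)
            gpu->fe_waitcycles = min(gpu->base_rate_core >> (15 - gpu->freq_scale),
                                     0xffffUL);
    else
            gpu->fe_waitcycles = 200;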


Re: [PATCH 1/5] drm/msm/adreno: Use OPP for every GPU generation

2023-02-24 Thread Chris Healy
I may be missing something, but looking at the code path for a2xx,
it's not clear to me how this would work with a2xx SoCs that
don't support 200MHz for the GPU frequency.  For example, the NXP i.MX51
requires the A205 GPU to run at 166MHz, while the NXP i.MX53 requires
it to run at 200MHz.
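
One way a driver can cope with such per-SoC limits is to take them from a
DT opp-table and fall back gracefully. A sketch using the generic OPP API
follows; the fallback-to-current-rate policy is an assumption for
illustration, not what the patch under review does:

    #include <linux/clk.h>
    #include <linux/pm_opp.h>

    static int gpu_init_opp(struct device *dev, struct clk *core_clk)
    {
            /* Prefer an opp-table from DT, so an i.MX51 board can
             * state 166MHz and an i.MX53 board 200MHz. */
            int ret = devm_pm_opp_of_add_table(dev);

            /* No opp-table node in DT: register the clock's current
             * rate as the only OPP instead of hardcoding 200MHz. */
            if (ret == -ENODEV)
                    ret = dev_pm_opp_add(dev, clk_get_rate(core_clk), 0);

            return ret;
    }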


Re: IMX6 etnaviv issue

2022-10-23 Thread Chris Healy
I can't speak to why you are experiencing issues when using the GPU,
but in the examples you gave, the example that is working is using a
SW based GL implementation instead of the real GPU.  This can be
determined by looking at the GL_RENDERER string to see if it mentions
a Vivante GPU or something else (like LLVMPIPE).  It's quite likely
that if you were using the real GPU with etnaviv in Mesa with the
older config you would also experience similar issues.  As such, we
shouldn't consider this a regression between the two Ubuntu versions.

One thing you may want to try is running with Mesa 22.2.1 and TOT to
see if either of these addresses any of the issues you are experiencing.

On Thu, Oct 20, 2022 at 1:44 PM Tim Harvey  wrote:
>
> Greetings,
>
> I use a standard Ubuntu 20.04 focal rootfs with a mainline kernel on
> an IMX6Q based board and have had no issues using things like gnome
> desktop, glxgears, glmark2 however recently I updated the rootfs to
> Ubuntu 22.04 jammy using the same mainline kernel and now I see some
> issues. I've replicated the issue with several kernel versions
> including 5.4, 5.10, 5.15 and 6.0 so I would say this is not a kernel
> regression but something related to the graphics stack being used
> which I'm not very familiar with.
>
> The issues I see can be described as:
> - mouse cursor is incorrect (looks like a hatched square)
> - glxgears shows some sort of sync/jitter issue and has a fairly low framerate
> - glmark2 shows a some sync issues then after a few seconds results in
> a GPU hang
>
> My ubuntu focal image that appears to work fine has the following:
> gnome 3.36.5-0
> xserver-xorg 1:7.7+19
> xserver-xorg-core 2:1.20.13-1
> xwayland 2:1.20.13-1
> glmark2 2021.02
> mesa-utils 8.4.0-1
> GL_VENDOR: Mesa/X.org
> GL_RENDERER: llvmpipe (LLVM 12.0.0, 128 bits)
> GL_VERSION: 3.1 Mesa 21.2.6
>
> My ubuntu jammy image that has the issues has the following:
> gnome-41.7-0
> xserver-xorg 1:7.7+23
> xserver-xorg-core 2:21.1.3-2
> xwayland 2:22.1.1-1
> glmark2 2021.02-0
> mesa-utils 8.4.0-1
> GL_VENDOR: etnaviv
> GL_RENDERER: Vivante GC2000 rev 5108
> GL_VERSION: 2.1 Mesa 22.0.5
>
> Does anyone have any ideas on what might be going on here? I apologize
> for my lack of knowledge regarding the software layers on top of the
> etnaviv kernel driver being used here.
>
> Best Regards,
>
> Tim


Re: [PATCH 0/5] drm/etnaviv: Ignore MC bit when checking for runtime suspend

2020-06-26 Thread Chris Healy
Would this power difference with the GPU also apply with the GC3000 in the
i.MX6qp or the GC2000 in the i.MX6q?

On Thu, Jun 25, 2020 at 8:04 AM Guido Günther  wrote:

> Hi,
> On Tue, Mar 03, 2020 at 12:55:04PM +0100, Lucas Stach wrote:
> > On Mo, 2020-03-02 at 20:13 +0100, Guido Günther wrote:
> > > At least GC7000 fails to enter runtime suspend for long periods of time
> > > since the MC becomes busy again even when the FE is idle. The rest of the
> > > series makes similar issues easier to debug in the future by checking all
> > > known bits in debugfs and also warning in the EBUSY case.
> >
> > Thanks, series applied to etnaviv/next.
> >
> > > Tested on GC7000 with a reduced runtime delay of 50ms. Patches are
> > > against next-20200226.
> >
> > I've already wondered if 200ms is too long, 50ms sounds more
> > reasonable. Do you have any numbers on the power draw on the i.MX8M
> > with an idle GPU vs. being fully power gated?
>
> The difference is at least 250mW. It makes a huge difference over here.
> We hit
>
> https://lore.kernel.org/dri-devel/20200614064601.7872-1-navid.emamdo...@gmail.com/
> recently, and you notice it instantly when that happens by looking at the
> SoC temperature.
>
> Cheers,
>  -- Guido
> >
> > Regards,
> > Lucas
> >
> > > Thanks to Lucas Stach for pointing me in the right direction.
> > >
> > > Guido Günther (5):
> > >   drm/etnaviv: Fix typo in comment
> > >   drm/etnaviv: Update idle bits
> > >   drm/etnaviv: Consider all kwnown idle bits in debugfs
> > >   drm/etnaviv: Ignore MC when checking runtime suspend idleness
> > >   drm/etnaviv: Warn when GPU doesn't idle fast enough
> > >
> > >  drivers/gpu/drm/etnaviv/etnaviv_gpu.c  | 26 ++
> > >  drivers/gpu/drm/etnaviv/state_hi.xml.h |  7 +++
> > >  2 files changed, 29 insertions(+), 4 deletions(-)
> > >
> >
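
The idea in the series title boils down to a few lines. Here is a sketch
reconstructed from the cover-letter description above; the register and
bit names are etnaviv's, but this is not the literal patch:

    u32 idle = gpu_read(gpu, VIVS_HI_IDLE_STATE);

    /* The MC reports busy again even while the FE is idle, so treat
     * it as always idle when deciding whether to runtime suspend. */
    idle |= VIVS_HI_IDLE_STATE_MC;
    if ((idle & gpu->idle_mask) != gpu->idle_mask)
            return -EBUSY;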


Re: [PATCH] qcom-scm: Include header

2019-01-14 Thread Chris Healy
There we go, I was definitely confused...  Tnx

On Mon, Jan 14, 2019 at 12:02 PM Fabio Estevam  wrote:
>
> Hi Chris,
>
> On Mon, Jan 14, 2019 at 5:54 PM Chris Healy  wrote:
> >
> > Perhaps I am confused but it appears that this patch has already
> > landed upstream and got included in 5.0-rc2:
>
> The patch that Amit is referring to is the following entry in the MAINTAINERS file:
>
> +F: include/linux/qcom*
>
> so that the proper lists can be put on Cc on future changes of this file.


Re: [PATCH] qcom-scm: Include header

2019-01-14 Thread Chris Healy
Perhaps I am confused but it appears that this patch has already
landed upstream and got included in 5.0-rc2:

https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/include/linux/qcom_scm.h?h=v5.0-rc2&id=2076607a20bd4dfba699185616cbbbce06d3fa59

On Mon, Jan 14, 2019 at 11:51 AM Amit Kucheria
 wrote:
>
> On Sun, Dec 30, 2018 at 1:21 AM Andy Gross  wrote:
> >
> > On Sat, Dec 29, 2018 at 10:19:32AM -0200, Fabio Estevam wrote:
> > > Hi Bjorn,
> > >
> > > On Fri, Dec 28, 2018 at 10:27 PM Bjorn Andersson
> > >  wrote:
> > >
> > > > Sorry about that, I forgot that the header file is not covered by the
> > > > MAINTAINERS file.
> > > >
> > > > Your second patch looks good, but I'm hoping we can merge the upcoming
> > > > v3 of Amit's patch right after the merge window. It fixes this and a lot
> > > > of other pieces where we would like to include linux-arm-msm@:
> > > >
> > > > https://lore.kernel.org/lkml/d153a86748f99526e7790bfc4ef8781a2016fd51.1545126964.git.amit.kuche...@linaro.org/
> > >
> > > Amit's patch adds the following entry:
> > >
> > > +F: include/linux/*/qcom*
> > >
> > > but it does not catch include/linux/qcom_scm.h
> > >
> > > It also needs
> > >
> > > +F: include/linux/qcom*
> > >
> > > in order to catch include/linux/qcom-geni-se.h  and 
> > > include/linux/qcom_scm.h
> > >
> > > I can add that entry after Amit's patch gets applied.
> >
> > Or I can add it to Amit's.  I'll ping him to make sure that's ok.
> >
>
> I'd forgotten about this patch! Just sent out v3 which is still
> missing "F: include/linux/qcom*".
>
> Let me know if you want me to send out v4 with this added.
>
> Regards,
> Amit


Re: [PATCH v3 7/7] drm/bridge: tc358767: add copyright lines

2017-11-08 Thread Chris Healy
Acked-by: Chris Healy 

On Tue, Nov 7, 2017 at 8:56 AM, Andrey Gusakov
 wrote:
> Add copyright lines for Zodiac, who paid for the driver development.
>
> Signed-off-by: Andrey Gusakov 
> ---
>  drivers/gpu/drm/bridge/tc358767.c |2 ++
>  1 file changed, 2 insertions(+)
>
> diff --git a/drivers/gpu/drm/bridge/tc358767.c 
> b/drivers/gpu/drm/bridge/tc358767.c
> index 37e33f2..69d2af3 100644
> --- a/drivers/gpu/drm/bridge/tc358767.c
> +++ b/drivers/gpu/drm/bridge/tc358767.c
> @@ -6,6 +6,8 @@
>   *
>   * Copyright (C) 2016 Pengutronix, Philipp Zabel 
>   *
> + * Copyright (C) 2016 Zodiac Inflight Innovations
> + *
>   * Initially based on: drivers/gpu/drm/i2c/tda998x_drv.c
>   *
>   * Copyright (C) 2012 Texas Instruments
> --
> 1.7.10.4
>


Re: [PATCH] drm/etnaviv: add etnaviv cooling device

2017-03-20 Thread Chris Healy
I don't have any input on this binary divider subject but I do want to
bring up some observations regarding Etnaviv GPU power management that
seems relevant.

I've done some comparisons between the Freescale Vivante GPU driver
stack (1) and the Marvell PXA1928 Vivante GPU driver stack (2) and see
more functionality in the PXA1928 stack than the Freescale i.MX6 stack
that may be of value for Etnaviv.  When I look at the Marvell PXA1928
Vivante GPU driver stack, (2) I see "gpufreq" code (3) that includes
support for conservative, ondemand, performance, powersave, and
userspace governors.  Additionally, AFAIK the key feature needed to
support a gpufreq driver and associated governors is to be able to
know what the load is on the GPU.  When looking at the PXA1928 driver,
it seems that it is looking at some load counters within the GPU that
are likely to be common across platforms.  (Check
"gpufreq_get_gpu_load" (4) in gpufreq.c.)

Also, given the wealth of counters present in the 3DGPU and my
understanding that there are 3 different controllable GPU frequencies
(at least with the i.MX6), it seems that one could dynamically adjust
each of these 3 different controllable frequencies independently based
on associated load counters.  The i.MX6 has 3 different frequencies,
IIRC, AXI, 3DGPU core, and 3DGPU shader.  I believe there are counters
associated with each of these GPU sub-blocks so it seems feasible to
adjust each of the 3 buses based on the sub-block load.  (I'm no
expert by any means with any of this so this may be crazy talk...)

If my observations are correct that the gpufreq functionality present
in the PXA1928 driver is portable across SoC platforms with the
Vivante 3D GPUs, does it make sense to add a gpufreq driver with the
Etnaviv driver?

What are the benefits and drawbacks of implementing a gpufreq driver
with associated governors in comparison to adding this cooling device
driver functionality?  (It seems to me that a gpufreq driver is more
proactive and the cooling device is more reactive.)

Can and should gpufreq driver functionality (such as that present in
the PXA1928 driver) and the proposed cooling device functionality
co-exist?

(1) - https://github.com/etnaviv/vivante_kernel_drivers/tree/master/imx6_v4_0_0
(2) - https://github.com/etnaviv/vivante_kernel_drivers/tree/master/pxa1928
(3) - 
https://github.com/etnaviv/vivante_kernel_drivers/tree/master/pxa1928/hal/os/linux/kernel/gpufreq
(4) - 
https://github.com/etnaviv/vivante_kernel_drivers/blob/master/pxa1928/hal/os/linux/kernel/gpufreq/gpufreq.c#L1294
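
For reference, the mainline analogue of such a gpufreq driver would be a
devfreq device with the simple_ondemand governor. The sketch below only
shows the shape; the gpu_*_cycles() helpers are hypothetical stand-ins
for reading the load counters that gpufreq_get_gpu_load() uses, and this
is not existing etnaviv code:

    #include <linux/devfreq.h>
    #include <linux/err.h>

    /* Hypothetical PMU reads standing in for gpufreq_get_gpu_load(). */
    static unsigned long gpu_busy_cycles(struct device *dev)
    { return 0; /* hypothetical: read busy-cycle counter */ }
    static unsigned long gpu_total_cycles(struct device *dev)
    { return 1; /* hypothetical: read total-cycle counter */ }

    static int gpu_get_dev_status(struct device *dev,
                                  struct devfreq_dev_status *status)
    {
            status->busy_time = gpu_busy_cycles(dev);
            status->total_time = gpu_total_cycles(dev);
            return 0;
    }

    static int gpu_target(struct device *dev, unsigned long *freq, u32 flags)
    {
            /* Round *freq to a supported rate and program the GPU clock. */
            return 0;
    }

    static struct devfreq_dev_profile gpu_devfreq_profile = {
            .polling_ms     = 50,
            .get_dev_status = gpu_get_dev_status,
            .target         = gpu_target,
    };

    /* At probe time, register with the on-demand governor: */
    static int gpu_devfreq_init(struct device *dev)
    {
            struct devfreq *df;

            df = devm_devfreq_add_device(dev, &gpu_devfreq_profile,
                                         DEVFREQ_GOV_SIMPLE_ONDEMAND, NULL);
            return PTR_ERR_OR_ZERO(df);
    }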

On Wed, Mar 15, 2017 at 7:05 AM, Russell King - ARM Linux
 wrote:
> On Wed, Mar 15, 2017 at 02:03:09PM +0100, Lucas Stach wrote:
>> Am Sonntag, den 12.03.2017, 19:00 + schrieb Russell King:
>> > Each Vivante GPU contains a clock divider which can divide the GPU clock
>> > by 2^n, which can lower the power dissipation from the GPU.  It has been
>> > suggested that the GC600 on Dove is responsible for 20-30% of the power
>> > dissipation from the SoC, so lowering the GPU clock rate provides a way
>> > to throttle the power dissipation, and reduce the temperature when the
>> > SoC gets hot.
>> >
>> > This patch hooks the Etnaviv driver into the kernel's thermal management
>> > to allow the GPUs to be throttled when necessary, allowing a reduction in
>> > GPU clock rate from /1 to /64 in power of 2 steps.
>>
>> Are those power of 2 steps a hardware limitation, or is it something you
>> implemented this way to get a smaller number of steps, with a more
>> meaningful difference in clock speed?
>> My understanding was that the FSCALE value is just a regular divider
>> with all step values in the range of 1-64 being usable.
>
> I don't share your understanding.  The Vivante GAL kernel driver only
> ever sets power-of-two values.  I have no evidence to support your
> suggestion.
>
> There's evidence that says your understanding is incorrect, however.
> It isn't a divider.  A value of 0x40 gives the fastest clock rate,
> a value of 0x01 gives the slowest.  If it were a binary divider,
> a value of 0x7f would give the slowest rate - so why doesn't Vivante
> use that in galcore when putting the GPU into idle/lower power - why
> do they use just 0x01?
>
> This all leads me to believe that it's not a binary divider, but a
> set of bits that select the clock from a set of divide-by-two stages,
> and having more than one bit set is invalid.
>
> However, without definitive information from Vivante, we'll never
> really know.  We're unlikely to get that.
>
> --
> RMK's Patch system: http://www.armlinux.org.uk/developer/patches/
> FTTC broadband for 0.8mile line: currently at 9.6Mbps down 400kbps up
> according to speedtest.net.
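
Russell's reading corresponds to a one-hot tap select rather than a
numeric divider. As an illustrative sketch (an assumed helper, not
etnaviv's actual code):

    /*
     * Each value selects one tap in a chain of divide-by-two stages,
     * so the only valid FSCALE values are 0x40 (/1), 0x20 (/2), ...
     * 0x01 (/64); multi-bit values such as 0x7f would be invalid.
     */
    static u32 fscale_for_step(unsigned int step)
    {
            if (step > 6)
                    step = 6;       /* clamp to the slowest tap, /64 */

            return 0x40 >> step;
    }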

Re: [PATCH 1/3] drm/etnaviv: submit support for in-fences

2017-03-17 Thread Chris Healy
On Fri, Mar 17, 2017 at 7:58 AM, Lucas Stach  wrote:
> Am Freitag, den 17.03.2017, 14:42 + schrieb Russell King - ARM
> Linux:
>> On Fri, Mar 17, 2017 at 03:10:21PM +0100, Lucas Stach wrote:
>> > Am Donnerstag, den 16.03.2017, 12:05 +0100 schrieb Philipp Zabel:
>> > > Hi Gustavo,
>> > >
>> > > On Mon, 2017-03-13 at 14:37 -0300, Gustavo Padovan wrote:
>> > > [...]
>> > > > I was thinking on some function that would iterate over all fences in
>> > > > the fence_array and check their context. The if we find our own gpu
>> > > > context in there we fail the submit.
>> > >
>> > > Why would we have to fail if somebody feeds us our own fences? Wouldn't
>> > > it be enough to just wait if there are foreign fences in the array?
>> >
>> > Yes, skipping the wait if all fences are from our own context is an
>> > optimization and it's certainly not an issue if someone feeds us our own
>> > fences.
>>
>> Are you sure about that - what if we have two GPUs, a 2D and a 3D GPU,
>> and we're fed an etnaviv fence for one GPU when submitting to the
>> other GPU?
>>
>> So we do end up being fed our own fences, and we have to respect them,
>> otherwise we lose inter-GPU synchronisation, and that will break
>> existing userspace.
>>
> The etnaviv GPUs, while being on the same DRM device, have distinct
> fence contexts. So the 3D GPU will consider a fence from the 2D GPU as
> foreign and properly wait on it.
>
> It's only when we get an in fence that has been generated as an out
> fence by one (or multiple) submits to the same GPU, that we are able to
> skip the wait and enqueue the command without waiting for the fence to
> signal.

With regard to the 2D and 3D GPU case, it seems to me that a good
example use case would be Android, where the 3D GPU is used for all
the surface generation using OpenGL and the 2D GPU is then used
to composite all those surfaces together, leaving the 3D GPU free to
work on other stuff.  As I understand it, the 2D GPU is much faster at
2D compositing than the 3D GPU would be (not to mention less power
hungry).

>
> Regards,
> Lucas
>
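
The optimization Lucas describes is a small check on the submit path. A
minimal sketch follows; treat it as an illustration rather than the exact
etnaviv code, though dma_fence_match_context() is a real helper that also
walks fence arrays:

    /* Only block on foreign fences; anything from this GPU's own
     * fence context is already ordered by the ring. */
    if (!dma_fence_match_context(in_fence, gpu->fence_context)) {
            ret = dma_fence_wait(in_fence, true);
            if (ret)
                    return ret;
    }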