Re: [bugzilla-dae...@bugzilla.kernel.org: [Bug 213519] New: WARNING on system reboot in: drivers/gpu/drm/i915/intel_runtime_pm.c:635 intel_runtime_pm_driver_release]

2021-06-21 Thread Joonas Lahtinen
Hi Joel,

That seems like a genuine bug. Could you file it at the i915 bug tracker
with all the requested information to make sure we can take a look at
it:

https://gitlab.freedesktop.org/drm/intel/-/wikis/How-to-file-i915-bugs

Are you able to try different kernel versions to bisect which kernel
version/commit introduced the WARN?
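If you have not run a bisection before, the usual flow looks roughly like
this (the good/bad tags below are only an example; use the newest kernel you
know was still clean as the "good" point):

  git bisect start
  git bisect bad v5.12.12        # first version where the WARN is seen
  git bisect good v5.11          # assumed last known-good release
  # build and boot the kernel git checks out, then mark it:
  git bisect good                # or: git bisect bad
  # repeat until git prints the first bad commit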

Regards, Joonas

Quoting Bjorn Helgaas (2021-06-22 00:51:32)
> [+cc Joel (reporter)]
> 
> On Mon, Jun 21, 2021 at 04:50:14PM -0500, Bjorn Helgaas wrote:
> > - Forwarded message from bugzilla-dae...@bugzilla.kernel.org -
> > 
> > Date: Mon, 21 Jun 2021 02:50:09 +
> > From: bugzilla-dae...@bugzilla.kernel.org
> > To: bj...@helgaas.com
> > Subject: [Bug 213519] New: WARNING on system reboot in:
> >   drivers/gpu/drm/i915/intel_runtime_pm.c:635 
> > intel_runtime_pm_driver_release
> > Message-ID: 
> > 
> > https://bugzilla.kernel.org/show_bug.cgi?id=213519
> > 
> > Bug ID: 213519
> >Summary: WARNING on system reboot in:
> > drivers/gpu/drm/i915/intel_runtime_pm.c:635
> > intel_runtime_pm_driver_release
> >Product: Drivers
> >Version: 2.5
> > Kernel Version: 5.12.12
> >   Hardware: x86-64
> > OS: Linux
> >   Tree: Mainline
> > Status: NEW
> >   Severity: normal
> >   Priority: P1
> >  Component: PCI
> >   Assignee: drivers_...@kernel-bugs.osdl.org
> >   Reporter: j-c...@westvi.com
> > Regression: No
> > 
> > Created attachment 297517
> >   --> https://bugzilla.kernel.org/attachment.cgi?id=297517&action=edit
> > Contents of 'warning' stack trace, etc.
> > 
> > As mentioned in the summary, a warning message appears in this routine at system reboot.
> > Try as I might, I cannot include the text of the warning directly here in the
> > description without losing carriage returns, so I include it as a text
> > attachment.
> > 
> > - End forwarded message -
> > 
> > [Attachment contents below]
> > 
> > [  239.019148] [ cut here ]
> > [  239.024226] i915 :00:02.0: i915 raw-wakerefs=1 wakelocks=1 on cleanup
> > [  239.031561] WARNING: CPU: 4 PID: 2484 at drivers/gpu/drm/i915/intel_runtime_pm.c:635 intel_runtime_pm_driver_release+0x4f/0x60
> > [  239.043974] Modules linked in: mei_wdt x86_pkg_temp_thermal ghash_clmulni_intel mei_me mei cryptd
> > [  239.053656] CPU: 4 PID: 2484 Comm: reboot Not tainted 5.12.12 #1
> > [  239.060236] Hardware name: To Be Filled By O.E.M. To Be Filled By O.E.M./NUC-8665UE, BIOS P1.50 06/04/2021
> > [  239.070766] RIP: 0010:intel_runtime_pm_driver_release+0x4f/0x60
> > [  239.077256] Code: 10 4c 8b 6f 50 4d 85 ed 75 03 4c 8b 2f e8 59 8f 11 00 41 89 d8 44 89 e1 4c 89 ea 48 89 c6 48 c7 c7 f8 25 7d b0 e8 06 e8 67 00 <0f> 0b 5b 41 5c 41 5d 5d c3 0f 1f 84 00 00 00 00 00 55 48 89 e5 48
> > [  239.097700] RSP: 0018:b8c682f3bd30 EFLAGS: 00010286
> > [  239.103422] RAX:  RBX: 0001 RCX: b0af01e8
> > [  239.85] RDX:  RSI: dfff RDI: b0a401e0
> > [  239.118850] RBP: b8c682f3bd48 R08:  R09: b8c682f3bb08
> > [  239.126617] R10: b8c682f3bb00 R11: b0b20228 R12: 0001
> > [  239.134390] R13: 978680d114b0 R14: 97868197eae8 R15: fee1dead
> > [  239.142203] FS:  7f741a182580() GS:9789dc50() knlGS:
> > [  239.151044] CS:  0010 DS:  ES:  CR0: 80050033
> > [  239.157318] CR2: 0169f4c8 CR3: 00019cf14003 CR4: 003706e0
> > [  239.165098] DR0:  DR1:  DR2: 
> > [  239.172874] DR3:  DR6: fffe0ff0 DR7: 0400
> > [  239.180658] Call Trace:
> > [  239.183346]  i915_driver_shutdown+0xcf/0xe0
> > [  239.187920]  i915_pci_shutdown+0x10/0x20
> > [  239.192181]  pci_device_shutdown+0x35/0x60
> > [  239.196629]  device_shutdown+0x156/0x1b0
> > [  239.200827]  __do_sys_reboot.cold+0x2f/0x5b
> > [  239.205410]  __x64_sys_reboot+0x16/0x20
> > [  239.209586]  do_syscall_64+0x38/0x50
> > [  239.213399]  entry_SYSCALL_64_after_hwframe+0x44/0xae
> > [  239.218837] RIP: 0033:0x7f741a0a9bc3
> > [  239.222740] Code: 64 89 01 48 83 c8 ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 44 00 00 89 fa be 69 19 12 28 bf ad de e1 fe b8 a9 00 00 00 0f 05 <48> 3d 00 f0 ff ff 77 05 c3 0f 1f 40 00 48 8b 15 71 c2 0c 00 f7 d8
> > [  239.243228] RSP: 002b:7ffcc2a16488 EFLAGS: 0206 ORIG_RAX: 00a9
> > [  239.251503] RAX: ffda RBX: 7ffcc2a165d8 RCX: 7f741a0a9bc3
> > [  239.259304] RDX: 01234567 RSI: 28121969 RDI: fee1dead
> > [  239.267105] RBP: 0004 R08:  R09: 0169e2e0
> > [  239.274926] R10: fd06 R11: 

Re: [PATH 0/4] [RFC] Support virtual DRM

2021-06-21 Thread Esaki Tomohito
Hi Maxime,
Thank you for your reply.

On 2021/06/21 18:24, Maxime Ripard wrote:
> Hi,
> 
> On Mon, Jun 21, 2021 at 09:10:19AM +0200, Thomas Zimmermann wrote:
>> Am 21.06.21 um 08:27 schrieb Tomohito Esaki:
>>> Virtual DRM splits the overlay planes of a display controller into multiple
>>> virtual devices to allow each plane to be accessed by each process.
>>>
>>> This makes it possible to overlay images output from multiple processes on a
>>> display. For example, one process displays the camera image without 
>>> compositor
>>> while another process overlays the UI.
>>
>> I briefly looked over your patches. I didn't understand how this is
>> different to the functionality of a compositor? Shouldn't this be solved in
>> userspace?
> 
> I think there could be a bunch of use-cases for something that could
> "steal" a plane without the compositor knowing.
> 
> Something I'd really like to work at some point for example is that the
> downstream RaspberryPi display driver has a visual clue when it's
> running too hot or is in over-current.
> 
> I don't think this is the right solution though. The DT binding makes it
> far too static, and if there's a compositor I'd assume it would want to
> know about it somehow (at least if it's from the userspace) ?
> 

I will reconsider the DT bindings.

We want to separate the resources from the master in units of planes,
so we proposed virtual DRM.
By separating the plane from the master and making it appear as
a virtual DRM device in userland, the plane can be accessed from
userland using the general DRM API.
What do you think about this idea?
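For illustration, this is roughly what a client process could then do against
such a virtual device node with plain libdrm; the device path, CRTC/framebuffer
ids and geometry below are placeholders, not something defined by this series:

#include <fcntl.h>
#include <stdint.h>
#include <unistd.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

/* Put a prepared framebuffer on the one plane exposed by a vDRM node. */
static int vdrm_show(uint32_t crtc_id, uint32_t fb_id, uint32_t w, uint32_t h)
{
	int fd = open("/dev/dri/card1", O_RDWR);	/* the virtual device */
	drmModePlaneRes *res;
	int ret = -1;

	if (fd < 0)
		return -1;

	drmSetClientCap(fd, DRM_CLIENT_CAP_UNIVERSAL_PLANES, 1);

	res = drmModeGetPlaneResources(fd);		/* the split-off plane */
	if (res && res->count_planes) {
		/* legacy plane update; src_* are in 16.16 fixed point */
		ret = drmModeSetPlane(fd, res->planes[0], crtc_id, fb_id, 0,
				      0, 0, w, h,
				      0, 0, w << 16, h << 16);
	}

	drmModeFreePlaneResources(res);
	close(fd);
	return ret;
}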

Best Regards
Tomohito Esaki


Re: [PATH 3/4] dt-bindings: display: Add virtual DRM

2021-06-21 Thread Esaki Tomohito
Hi, Rob

Thank you for the error report and advice.
I will recheck the DT binding.

Best regards
Tomohito Esaki

On 2021/06/22 2:40, Rob Herring wrote:
> On Mon, 21 Jun 2021 15:44:02 +0900, Tomohito Esaki wrote:
>> Add device tree bindings documentation for virtual DRM.
>>
>> Signed-off-by: Tomohito Esaki 
>> ---
>>  .../devicetree/bindings/display/vdrm.yaml | 67 +++
>>  1 file changed, 67 insertions(+)
>>  create mode 100644 Documentation/devicetree/bindings/display/vdrm.yaml
>>
> 
> My bot found errors running 'make DT_CHECKER_FLAGS=-m dt_binding_check'
> on your patch (DT_CHECKER_FLAGS is new in v5.13):
> 
> yamllint warnings/errors:
> ./Documentation/devicetree/bindings/display/vdrm.yaml:39:1: [error] syntax 
> error: found character '\t' that cannot start any token (syntax)
> 
> dtschema/dtc warnings/errors:
> make[1]: *** Deleting file 
> 'Documentation/devicetree/bindings/display/vdrm.example.dts'
> Traceback (most recent call last):
>   File "/usr/local/bin/dt-extract-example", line 45, in 
> binding = yaml.load(open(args.yamlfile, encoding='utf-8').read())
>   File "/usr/local/lib/python3.8/dist-packages/ruamel/yaml/main.py", line 
> 434, in load
> return constructor.get_single_data()
>   File "/usr/local/lib/python3.8/dist-packages/ruamel/yaml/constructor.py", 
> line 120, in get_single_data
> node = self.composer.get_single_node()
>   File "_ruamel_yaml.pyx", line 706, in _ruamel_yaml.CParser.get_single_node
>   File "_ruamel_yaml.pyx", line 724, in _ruamel_yaml.CParser._compose_document
>   File "_ruamel_yaml.pyx", line 775, in _ruamel_yaml.CParser._compose_node
>   File "_ruamel_yaml.pyx", line 889, in 
> _ruamel_yaml.CParser._compose_mapping_node
>   File "_ruamel_yaml.pyx", line 775, in _ruamel_yaml.CParser._compose_node
>   File "_ruamel_yaml.pyx", line 889, in 
> _ruamel_yaml.CParser._compose_mapping_node
>   File "_ruamel_yaml.pyx", line 775, in _ruamel_yaml.CParser._compose_node
>   File "_ruamel_yaml.pyx", line 889, in 
> _ruamel_yaml.CParser._compose_mapping_node
>   File "_ruamel_yaml.pyx", line 775, in _ruamel_yaml.CParser._compose_node
>   File "_ruamel_yaml.pyx", line 889, in 
> _ruamel_yaml.CParser._compose_mapping_node
>   File "_ruamel_yaml.pyx", line 775, in _ruamel_yaml.CParser._compose_node
>   File "_ruamel_yaml.pyx", line 889, in 
> _ruamel_yaml.CParser._compose_mapping_node
>   File "_ruamel_yaml.pyx", line 731, in _ruamel_yaml.CParser._compose_node
>   File "_ruamel_yaml.pyx", line 904, in _ruamel_yaml.CParser._parse_next_event
> ruamel.yaml.scanner.ScannerError: while scanning a plain scalar
>   in "", line 38, column 15
> found a tab character that violates indentation
>   in "", line 39, column 1
> make[1]: *** [Documentation/devicetree/bindings/Makefile:20: 
> Documentation/devicetree/bindings/display/vdrm.example.dts] Error 1
> make[1]: *** Waiting for unfinished jobs
> ./Documentation/devicetree/bindings/display/vdrm.yaml:  while scanning a 
> plain scalar
>   in "", line 38, column 15
> found a tab character that violates indentation
>   in "", line 39, column 1
> /builds/robherring/linux-dt-review/Documentation/devicetree/bindings/display/vdrm.yaml:
>  ignoring, error parsing file
> warning: no schema found in file: 
> ./Documentation/devicetree/bindings/display/vdrm.yaml
> make: *** [Makefile:1416: dt_binding_check] Error 2
> doc reference errors (make refcheckdocs):
> 
> See https://patchwork.ozlabs.org/patch/1494913
> 
> This check can fail if there are any dependencies. The base for a patch
> series is generally the most recent rc1.
> 
> If you already ran 'make dt_binding_check' and didn't see the above
> error(s), then make sure 'yamllint' is installed and dt-schema is up to
> date:
> 
> pip3 install dtschema --upgrade
> 
> Please check and re-submit.
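A quick way to find the offending tab locally (assuming GNU grep with -P
support) is:

  grep -nP '\t' Documentation/devicetree/bindings/display/vdrm.yaml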
> 


Re: [PATH 1/4] drm: Add Virtual DRM device driver

2021-06-21 Thread Esaki Tomohito
Hi, Sam

Thank you for looking at the details.
I will fix it according to your comment.

Best regards
Tomohito Esaki

On 2021/06/22 0:55, Sam Ravnborg wrote:
> Hi Tomohito
> 
> On Mon, Jun 21, 2021 at 03:44:00PM +0900, Tomohito Esaki wrote:
>> Virtual DRM splits the resources of an overlay plane into multiple
>> virtual devices to allow each plane to be accessed by each process.
>>
>> This makes it possible to overlay images output from multiple processes
>> on a display. For example, one process displays the camera image without
>> compositor while another process overlays the compositor's drawing of
>> the UI.
>>
>> The virtual DRM creates a standalone virtual device and makes DRM planes
>> from a master device (e.g. card0) accessible via one or more virtual
>> devices. However, these planes are no longer accessible from the original
>> device.
>> Each virtual device (and plane) can be accessed via a separate
>> device file.
>>
>> Signed-off-by: Tomohito Esaki 
>> ---
>>  drivers/gpu/drm/Kconfig |   7 +
>>  drivers/gpu/drm/Makefile|   1 +
>>  drivers/gpu/drm/vdrm/vdrm_api.h |  68 +++
>>  drivers/gpu/drm/vdrm/vdrm_drv.c | 859 
>>  drivers/gpu/drm/vdrm/vdrm_drv.h |  80 +++
> 
> Please consider making the header files self-contained,
> so there are no hidden dependencies between the two.
> 
> Use forward declarations rather than including header files where possible.
> 
> A few trivial comments in the following. I did not try to follow all the
> functionality of the driver and I expect others to comment on the idea.
> 
>   Sam
> 
>>  5 files changed, 1015 insertions(+)
>>  create mode 100644 drivers/gpu/drm/vdrm/vdrm_api.h
>>  create mode 100644 drivers/gpu/drm/vdrm/vdrm_drv.c
>>  create mode 100644 drivers/gpu/drm/vdrm/vdrm_drv.h
>>
>> diff --git a/drivers/gpu/drm/Kconfig b/drivers/gpu/drm/Kconfig
>> index 3c16bd1afd87..ba7f4eeab385 100644
>> --- a/drivers/gpu/drm/Kconfig
>> +++ b/drivers/gpu/drm/Kconfig
>> @@ -294,6 +294,13 @@ config DRM_VKMS
>>  
>>If M is selected the module will be called vkms.
>>  
>> +config DRM_VDRM
>> +tristate "Virtual DRM"
>> +depends on DRM
>> +help
>> +  Virtual DRM splits the resources of an overlay plane into multiple
>> +  virtual devices to allow each plane to be accessed by each process.
> Could you look into pulling a bit more info in here? You made a very nice
> intro in the patch; consider using it in the help text too.
> 
> 
>> +
>>  source "drivers/gpu/drm/exynos/Kconfig"
>>  
>>  source "drivers/gpu/drm/rockchip/Kconfig"
>> diff --git a/drivers/gpu/drm/Makefile b/drivers/gpu/drm/Makefile
>> index 5279db4392df..55dbf85e2579 100644
>> --- a/drivers/gpu/drm/Makefile
>> +++ b/drivers/gpu/drm/Makefile
>> @@ -82,6 +82,7 @@ obj-$(CONFIG_DRM_VMWGFX)+= vmwgfx/
>>  obj-$(CONFIG_DRM_VIA)   +=via/
>>  obj-$(CONFIG_DRM_VGEM)  += vgem/
>>  obj-$(CONFIG_DRM_VKMS)  += vkms/
>> +obj-$(CONFIG_DRM_VDRM)  += vdrm/
> Alphabetic order (mostly) so before vgem/
> 
>>  obj-$(CONFIG_DRM_NOUVEAU) +=nouveau/
>>  obj-$(CONFIG_DRM_EXYNOS) +=exynos/
>>  obj-$(CONFIG_DRM_ROCKCHIP) +=rockchip/
>> diff --git a/drivers/gpu/drm/vdrm/vdrm_api.h 
>> b/drivers/gpu/drm/vdrm/vdrm_api.h
>> new file mode 100644
>> index ..dd4d7e774800
>> --- /dev/null
>> +++ b/drivers/gpu/drm/vdrm/vdrm_api.h
>> @@ -0,0 +1,68 @@
>> +/* SPDX-License-Identifier: GPL-2.0+ */
>> +/*
>> + * vdrm_api.h -- Virtual DRM API
>> + *
>> + * Copyright (C) 2021 Renesas Electronics Corporation
>> + */
>> +
>> +#ifndef __VDRM_API__
>> +#define __VDRM_API__
>> +
>> +#include 
>> +#include 
>> +
>> +/**
>> + * struct vdrm_property_info - Information about the properties passed from
>> + * the DRM driver to vDRM
>> + * @prop: Parent property to pass to vDRM
>> + * @default_val: Default value for the property passed to vDRM
>> + */
>> +struct vdrm_property_info {
>> +struct drm_property *prop;
>> +uint64_t default_val;
>> +};
> It would be nice that all structs used inline comments - and then you
> are consistent too.
> 
>> +
>> +/**
>> + * struct vdrm_funcs - Callbacks to parent DRM driver
>> + */
>> +struct vdrm_funcs {
>> +/**
>> + * @dumb_create:
>> + *
>> + * Called by _driver.dumb_create. Please read the documentation
>> + * for the _driver.dumb_create hook for more details.
>> + */
>> +int (*dumb_create)(struct drm_file *file, struct drm_device *dev,
>> +   struct drm_mode_create_dumb *args);
>> +
>> +/**
>> + * @crtc_flush:
>> + *
>> + * Called by _crtc_helper_funcs.atomic_flush. Please read the
>> + * documentation for the _crtc_helper_funcs.atomic_flush hook for
>> + * more details.
>> + */
>> +void (*crtc_flush)(struct drm_crtc *crtc);
>> +};
>> +
>> +struct vdrm_device;
>> +struct vdrm_display;
>> +
>> +void vdrm_drv_handle_vblank(struct vdrm_display *vdisplay);
>> +void vdrm_drv_finish_page_flip(struct vdrm_display *vdisplay);

Re: [PATH 0/4] [RFC] Support virtual DRM

2021-06-21 Thread Esaki Tomohito
Hi Enrico,
Thank you for your reply.

On 2021/06/22 1:05, Enrico Weigelt, metux IT consult wrote:
> On 21.06.21 08:27, Tomohito Esaki wrote:
> 
> Hi,
> 
>> Virtual DRM splits the overlay planes of a display controller into multiple
>> virtual devices to allow each plane to be accessed by each process.
>>
>> This makes it possible to overlay images output from multiple processes on a
>> display. For example, one process displays the camera image without 
>> compositor
>> while another process overlays the UI.
> 
> Are you attempting to create a simple in-kernel compositor?

I think the basic idea is the same as DRM lease.
We want to separate the resources from the master in units of planes,
so we proposed virtual DRM.
I think the advantage of vDRM is that you can use general DRM APIs
in userland.

> I don't think that's the way to go, at least not by touching each
> single display driver, and not hardcoding the planes in DT.

Thank you for the comment. I will reconsider the DT approach.

> What's the actual use case you're doing that for ? Why not using some
> userland compositor ?

I think when latency is important (e.g. AR, VR, or displaying camera
images in IVI systems), there may be use cases where the compositor
cannot be used.
Normally, when an image is passed through the compositor, it is
displayed after up to two VSYNCs, because the compositor composites the
image synchronized to VSYNC. On the other hand, if we use vDRM, the
image is displayed at the next VSYNC, i.e. after at most one VSYNC.
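For example, assuming a 60 Hz panel, that is roughly 33 ms of worst-case
latency through the compositor versus about 17 ms when the plane is flipped
directly.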

Also, since the compositor is a single point of failure, we may not want
to depend on it.

Best regards
Tomohito Esaki


Re: [PATH 0/4] [RFC] Support virtual DRM

2021-06-21 Thread Esaki Tomohito
Hi Thomas,
Thank you for your reply.

On 2021/06/21 16:10, Thomas Zimmermann wrote:
> Hi
> 
> Am 21.06.21 um 08:27 schrieb Tomohito Esaki:
>> Virtual DRM splits the overlay planes of a display controller into
>> multiple
>> virtual devices to allow each plane to be accessed by each process.
>>
>> This makes it possible to overlay images output from multiple
>> processes on a
>> display. For example, one process displays the camera image without
>> compositor
>> while another process overlays the UI.
> 
> I briefly looked over your patches. I didn't understand how this is
> different to the functionality of a compositor? Shouldn't this be solved
> in userspace?

I think when latency is important (e.g. AR, VR, or displaying camera
images in IVI systems), there may be use cases where the compositor
cannot be used.
Normally, when an image is passed through the compositor, it is
displayed after up to two VSYNCs, because the compositor composites the
image synchronized to VSYNC. On the other hand, if we use vDRM, the
image is displayed at the next VSYNC, i.e. after at most one VSYNC.

Also, since the compositor is a single point of failure, we may not want
to depend on it.

Best regards
Tomohito Esaki


linux-next: manual merge of the devicetree tree with Linus' and the drm, arm-soc trees

2021-06-21 Thread Stephen Rothwell
Hi all,

Today's linux-next merge of the devicetree tree got conflicts in:

  Documentation/devicetree/bindings/display/bridge/cdns,mhdp8546.yaml
  Documentation/devicetree/bindings/media/renesas,drif.yaml
  Documentation/devicetree/bindings/net/stm32-dwmac.yaml

between commits:

  7169d082e7e6 ("dt-bindings: drm/bridge: MHDP8546 bridge binding changes for 
HDCP")
  8929ef8d4dfd ("media: dt-bindings: media: renesas,drif: Fix fck definition")
  fea998229140 ("dt-bindings: net: document ptp_ref clk in dwmac")

from Linus' and the drm, arm-soc trees and commit:

  972d6a7dcec3 ("dt-bindings: Drop redundant minItems/maxItems")

from the devicetree tree.

I fixed it up (I used one side or the other, please check when
linux-next is released) and can carry the fix as necessary. This is now
fixed as far as linux-next is concerned, but any non trivial conflicts
should be mentioned to your upstream maintainer when your tree is
submitted for merging.  You may also want to consider cooperating with
the maintainer of the conflicting tree to minimise any particularly
complex conflicts.

-- 
Cheers,
Stephen Rothwell




[PATCH v3 2/2] drm/bridge: ti-sn65dsi86: Implement the pwm_chip

2021-06-21 Thread Bjorn Andersson
The SN65DSI86 provides the ability to supply a PWM signal on GPIO 4,
with the primary purpose of controlling the backlight of the attached
panel. Add an implementation that exposes this using the standard PWM
framework, to allow e.g. pwm-backlight to expose this to the user.

Signed-off-by: Bjorn Andersson 
---

Changes since v2:
- Corrected calculation of scale, to include a 1 instead of 1/NSEC_TO_SEC and
  rounded the period up in get_state, to make sure it's idempotent
- Changed duty_cycle calculation to make sure it is idempotent over my tested
  period
- Documented "Limitations"
- Documented muxing operation after pm_runtime_get_sync()

 drivers/gpu/drm/bridge/ti-sn65dsi86.c | 335 +-
 1 file changed, 334 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/bridge/ti-sn65dsi86.c 
b/drivers/gpu/drm/bridge/ti-sn65dsi86.c
index 5d712c8c3c3b..0eabbdad1830 100644
--- a/drivers/gpu/drm/bridge/ti-sn65dsi86.c
+++ b/drivers/gpu/drm/bridge/ti-sn65dsi86.c
@@ -4,6 +4,7 @@
  * datasheet: https://www.ti.com/lit/ds/symlink/sn65dsi86.pdf
  */
 
+#include 
 #include 
 #include 
 #include 
@@ -15,6 +16,7 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 #include 
 
@@ -91,6 +93,13 @@
 #define SN_ML_TX_MODE_REG  0x96
 #define  ML_TX_MAIN_LINK_OFF   0
 #define  ML_TX_NORMAL_MODE BIT(0)
+#define SN_PWM_PRE_DIV_REG 0xA0
+#define SN_BACKLIGHT_SCALE_REG 0xA1
+#define  BACKLIGHT_SCALE_MAX   0x
+#define SN_BACKLIGHT_REG   0xA3
+#define SN_PWM_EN_INV_REG  0xA5
+#define  SN_PWM_INV_MASK   BIT(0)
+#define  SN_PWM_EN_MASK    BIT(1)
 #define SN_AUX_CMD_STATUS_REG  0xF4
 #define  AUX_IRQ_STATUS_AUX_RPLY_TOUT  BIT(3)
 #define  AUX_IRQ_STATUS_AUX_SHORT  BIT(5)
@@ -113,11 +122,14 @@
 
 #define SN_LINK_TRAINING_TRIES 10
 
+#define SN_PWM_GPIO_IDX    3 /* 4th GPIO */
+
 /**
  * struct ti_sn65dsi86 - Platform data for ti-sn65dsi86 driver.
  * @bridge_aux:   AUX-bus sub device for MIPI-to-eDP bridge functionality.
  * @gpio_aux: AUX-bus sub device for GPIO controller functionality.
  * @aux_aux:  AUX-bus sub device for eDP AUX channel functionality.
+ * @pwm_aux:  AUX-bus sub device for PWM controller functionality.
  *
  * @dev:  Pointer to the top level (i2c) device.
  * @regmap:   Regmap for accessing i2c.
@@ -145,11 +157,17 @@
  *bitmap so we can do atomic ops on it without an extra
  *lock so concurrent users of our 4 GPIOs don't stomp on
  *each other's read-modify-write.
+ *
+ * @pchip:pwm_chip if the PWM is exposed.
+ * @pwm_enabled:  Used to track if the PWM signal is currently enabled.
+ * @pwm_refclk_freq: Cache for the reference clock input to the PWM.
+ * @pwm_pin_busy: Track if GPIO4 is currently requested for GPIO or PWM.
  */
 struct ti_sn65dsi86 {
struct auxiliary_device bridge_aux;
struct auxiliary_device gpio_aux;
struct auxiliary_device aux_aux;
+   struct auxiliary_device pwm_aux;
 
struct device   *dev;
struct regmap   *regmap;
@@ -172,6 +190,12 @@ struct ti_sn65dsi86 {
struct gpio_chipgchip;
DECLARE_BITMAP(gchip_output, SN_NUM_GPIOS);
 #endif
+#if defined(CONFIG_PWM)
+   struct pwm_chip pchip;
+   boolpwm_enabled;
+   unsigned intpwm_refclk_freq;
+   atomic_tpwm_pin_busy;
+#endif
 };
 
 static const struct regmap_range ti_sn65dsi86_volatile_ranges[] = {
@@ -190,6 +214,25 @@ static const struct regmap_config 
ti_sn65dsi86_regmap_config = {
.cache_type = REGCACHE_NONE,
 };
 
+static int ti_sn65dsi86_read_u16(struct ti_sn65dsi86 *pdata,
+unsigned int reg, u16 *val)
+{
+   unsigned int tmp;
+   int ret;
+
+   ret = regmap_read(pdata->regmap, reg, &tmp);
+   if (ret)
+   return ret;
+   *val = tmp;
+
+   ret = regmap_read(pdata->regmap, reg + 1, &tmp);
+   if (ret)
+   return ret;
+   *val |= tmp << 8;
+
+   return 0;
+}
+
 static void ti_sn65dsi86_write_u16(struct ti_sn65dsi86 *pdata,
   unsigned int reg, u16 val)
 {
@@ -253,6 +296,14 @@ static void ti_sn_bridge_set_refclk_freq(struct 
ti_sn65dsi86 *pdata)
 
regmap_update_bits(pdata->regmap, SN_DPPLL_SRC_REG, REFCLK_FREQ_MASK,
   REFCLK_FREQ(i));
+
+#if defined(CONFIG_PWM)
+   /*
+* The PWM refclk is based on the value written to SN_DPPLL_SRC_REG,
+* regardless of its actual sourcing.
+*/
+   pdata->pwm_refclk_freq = ti_sn_bridge_refclk_lut[i];
+#endif
 }
 
 static void 

[PATCH v3 1/2] pwm: Introduce single-PWM of_xlate function

2021-06-21 Thread Bjorn Andersson
The existing pxa driver and the upcoming addition of PWM support in the
TI sn65dsi86 DSI/eDP bridge driver both have a single PWM channel and
thereby a need for an of_xlate function with the period as its single
argument.

Introduce a common helper function in the core that can be used as
of_xlate by such drivers and migrate the pxa driver to use this.
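As a usage sketch (the node names and the period value below are made up and
not part of this patch), a consumer of such a single-PWM chip then passes only
the period, with an optional flags cell:

  backlight {
          compatible = "pwm-backlight";
          pwms = <&my_pwm 5000000>;       /* one cell: period in ns */
  };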

Signed-off-by: Bjorn Andersson 
---

Changes since v2:
- None

 drivers/pwm/core.c| 26 ++
 drivers/pwm/pwm-pxa.c | 16 +---
 include/linux/pwm.h   |  2 ++
 3 files changed, 29 insertions(+), 15 deletions(-)

diff --git a/drivers/pwm/core.c b/drivers/pwm/core.c
index a42999f877d2..5e9c876fccc4 100644
--- a/drivers/pwm/core.c
+++ b/drivers/pwm/core.c
@@ -152,6 +152,32 @@ of_pwm_xlate_with_flags(struct pwm_chip *pc, const struct 
of_phandle_args *args)
 }
 EXPORT_SYMBOL_GPL(of_pwm_xlate_with_flags);
 
+struct pwm_device *
+of_pwm_single_xlate(struct pwm_chip *pc, const struct of_phandle_args *args)
+{
+   struct pwm_device *pwm;
+
+   if (pc->of_pwm_n_cells < 1)
+   return ERR_PTR(-EINVAL);
+
+   /* validate that one cell is specified, optionally with flags */
+   if (args->args_count != 1 && args->args_count != 2)
+   return ERR_PTR(-EINVAL);
+
+   pwm = pwm_request_from_chip(pc, 0, NULL);
+   if (IS_ERR(pwm))
+   return pwm;
+
+   pwm->args.period = args->args[0];
+   pwm->args.polarity = PWM_POLARITY_NORMAL;
+
+   if (args->args_count == 2 && args->args[1] & PWM_POLARITY_INVERTED)
+   pwm->args.polarity = PWM_POLARITY_INVERSED;
+
+   return pwm;
+}
+EXPORT_SYMBOL_GPL(of_pwm_single_xlate);
+
 static void of_pwmchip_add(struct pwm_chip *chip)
 {
if (!chip->dev || !chip->dev->of_node)
diff --git a/drivers/pwm/pwm-pxa.c b/drivers/pwm/pwm-pxa.c
index cfb683827d32..8cd82fb54483 100644
--- a/drivers/pwm/pwm-pxa.c
+++ b/drivers/pwm/pwm-pxa.c
@@ -148,20 +148,6 @@ static const struct platform_device_id 
*pxa_pwm_get_id_dt(struct device *dev)
return id ? id->data : NULL;
 }
 
-static struct pwm_device *
-pxa_pwm_of_xlate(struct pwm_chip *pc, const struct of_phandle_args *args)
-{
-   struct pwm_device *pwm;
-
-   pwm = pwm_request_from_chip(pc, 0, NULL);
-   if (IS_ERR(pwm))
-   return pwm;
-
-   pwm->args.period = args->args[0];
-
-   return pwm;
-}
-
 static int pwm_probe(struct platform_device *pdev)
 {
const struct platform_device_id *id = platform_get_device_id(pdev);
@@ -187,7 +173,7 @@ static int pwm_probe(struct platform_device *pdev)
pwm->chip.npwm = (id->driver_data & HAS_SECONDARY_PWM) ? 2 : 1;
 
if (IS_ENABLED(CONFIG_OF)) {
-   pwm->chip.of_xlate = pxa_pwm_of_xlate;
+   pwm->chip.of_xlate = of_pwm_single_xlate;
pwm->chip.of_pwm_n_cells = 1;
}
 
diff --git a/include/linux/pwm.h b/include/linux/pwm.h
index 5a73251d28e3..6aff1fa4fe5d 100644
--- a/include/linux/pwm.h
+++ b/include/linux/pwm.h
@@ -411,6 +411,8 @@ struct pwm_device *pwm_request_from_chip(struct pwm_chip 
*chip,
 
 struct pwm_device *of_pwm_xlate_with_flags(struct pwm_chip *pc,
const struct of_phandle_args *args);
+struct pwm_device *of_pwm_single_xlate(struct pwm_chip *pc,
+  const struct of_phandle_args *args);
 
 struct pwm_device *pwm_get(struct device *dev, const char *con_id);
 struct pwm_device *of_pwm_get(struct device *dev, struct device_node *np,
-- 
2.31.0



Re: [RFC PATCH 7/9] arm64: dts: imx8mm: Add eLCDIF node support

2021-06-21 Thread Adam Ford
On Mon, Jun 21, 2021 at 2:25 AM Jagan Teki  wrote:
>
> Add eLCDIF controller node for i.MX8MM.
>
> Cc: Rob Herring 
> Signed-off-by: Jagan Teki 
> ---
>  arch/arm64/boot/dts/freescale/imx8mm.dtsi | 19 +++
>  1 file changed, 19 insertions(+)
>
> diff --git a/arch/arm64/boot/dts/freescale/imx8mm.dtsi 
> b/arch/arm64/boot/dts/freescale/imx8mm.dtsi
> index fe5485ee9419..5f68182ed3a6 100644
> --- a/arch/arm64/boot/dts/freescale/imx8mm.dtsi
> +++ b/arch/arm64/boot/dts/freescale/imx8mm.dtsi
> @@ -1030,6 +1030,25 @@ aips4: bus@32c0 {
> #size-cells = <1>;
> ranges = <0x32c0 0x32c0 0x40>;
>
> +   lcdif: lcdif@32e0 {
> +   compatible = "fsl,imx8mm-lcdif", 
> "fsl,imx6sx-lcdif";

Based on a comment I read from Marek [1] on this patch series for
the driver, I think the fallback compatible should be fsl,imx28-lcdif.

"The iMX8MM and iMX8MN do not support the overlay plane, so they are MXSFB V4"

[1] - 
https://patchwork.kernel.org/project/dri-devel/patch/20210620224834.189411-1-ma...@denx.de/
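In DT terms the suggestion would amount to something like this (sketch of the
compatible line only):

  compatible = "fsl,imx8mm-lcdif", "fsl,imx28-lcdif";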

adam

> +   reg = <0x32e0 0x1>;
> +   clocks = < IMX8MM_CLK_LCDIF_PIXEL>,
> +< IMX8MM_CLK_DISP_AXI_ROOT>,
> +< IMX8MM_CLK_DISP_APB_ROOT>;
> +   clock-names = "pix", "disp_axi", "axi";
> +   assigned-clocks = < 
> IMX8MM_CLK_LCDIF_PIXEL>,
> + < IMX8MM_CLK_DISP_AXI>,
> + < IMX8MM_CLK_DISP_APB>;
> +   assigned-clock-parents = < 
> IMX8MM_VIDEO_PLL1_OUT>,
> +< 
> IMX8MM_SYS_PLL2_1000M>,
> +< 
> IMX8MM_SYS_PLL1_800M>;
> +   assigned-clock-rate = <59400>, 
> <5>, <2>;
> +   interrupts = ;
> +   power-domains = <_blk_ctl 
> IMX8MM_BLK_CTL_PD_DISPMIX_LCDIF>;
> +   status = "disabled";
> +   };
> +
> dispmix_blk_ctl: blk-ctl@32e28000 {
> compatible = "fsl,imx8mm-dispmix-blk-ctl", 
> "syscon";
> reg = <0x32e28000 0x100>;
> --
> 2.25.1
>


Re: [RFC PATCH 8/9] arm64: dts: imx8mm: Add MIPI DSI pipeline

2021-06-21 Thread Adam Ford
On Mon, Jun 21, 2021 at 2:25 AM Jagan Teki  wrote:
>
> Add MIPI DSI pipeline for i.MX8MM.
>
> Video pipeline start from eLCDIF to MIPI DSI and respective
> Panel or Bridge on the backend side.
>
> Add support for it.
>
> Cc: Rob Herring 
> Signed-off-by: Jagan Teki 
> ---
>  arch/arm64/boot/dts/freescale/imx8mm.dtsi | 59 +++
>  1 file changed, 59 insertions(+)
>
> diff --git a/arch/arm64/boot/dts/freescale/imx8mm.dtsi 
> b/arch/arm64/boot/dts/freescale/imx8mm.dtsi
> index 5f68182ed3a6..bc09fce0f6a9 100644
> --- a/arch/arm64/boot/dts/freescale/imx8mm.dtsi
> +++ b/arch/arm64/boot/dts/freescale/imx8mm.dtsi
> @@ -1047,6 +1047,65 @@ lcdif: lcdif@32e0 {
> interrupts = ;
> power-domains = <_blk_ctl 
> IMX8MM_BLK_CTL_PD_DISPMIX_LCDIF>;
> status = "disabled";
> +
> +   port {
> +   lcdif_out_dsi: endpoint {
> +   remote-endpoint = 
> <_in_lcdif>;
> +   };
> +   };
> +   };
> +
> +   dsi: dsi@32e1 {
> +   compatible = "fsl,imx8mm-sec-dsim";
> +   reg = <0x32e1 0xa0>;
> +   clocks = < IMX8MM_CLK_DSI_CORE>,
> +< IMX8MM_CLK_DSI_PHY_REF>;
> +   clock-names = "bus", "phy_ref";
> +   assigned-clocks = < IMX8MM_CLK_DSI_CORE>,
> + < 
> IMX8MM_VIDEO_PLL1_OUT>,
> + < 
> IMX8MM_CLK_DSI_PHY_REF>;
> +   assigned-clock-parents = < 
> IMX8MM_SYS_PLL1_266M>,
> +< 
> IMX8MM_VIDEO_PLL1_BYPASS>,
> +< 
> IMX8MM_VIDEO_PLL1_OUT>;
> +   assigned-clock-rates = <26600>, 
> <59400>, <2700>;
> +   interrupts = ;
> +   phys = <>;
> +   phy-names = "dphy";
> +   power-domains = <_blk_ctl 
> IMX8MM_BLK_CTL_PD_DISPMIX_MIPI_DSI>;
> +   samsung,burst-clock-frequency = <89100>;
> +   samsung,esc-clock-frequency = <5400>;
> +   samsung,pll-clock-frequency = <2700>;
> +   status = "disabled";
> +
> +   ports {
> +   #address-cells = <1>;
> +   #size-cells = <0>;
> +
> +   port@0 {
> +   reg = <0>;
> +   #address-cells = <1>;
> +   #size-cells = <0>;
> +
> +   dsi_in_lcdif: endpoint@0 {
> +   reg = <0>;

When I build this with W=1, I get a warning:

Warning (graph_child_address):
/soc@0/bus@32c0/dsi@32e1/ports/port@0: graph node has single
child node 'endpoint@0', #address-cells/#size-cells are not necessary

Are there supposed to be two endpoints for port@0?
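If there is only ever one endpoint there, the simpler form would be something
like this (sketch only, keeping the labels from this patch):

port@0 {
        reg = <0>;

        dsi_in_lcdif: endpoint {
                remote-endpoint = <&lcdif_out_dsi>;
        };
};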

> +   remote-endpoint = 
> <_out_dsi>;
> +   };
> +   };
> +
> +   port@1 {
> +   reg = <1>;
> +   };
> +   };
> +   };
> +
> +   dphy: dphy@32e100a4 {
> +   compatible = "fsl,imx8mm-sec-dsim-dphy";
> +   reg = <0x32e100a4 0xbc>;
> +   clocks = < IMX8MM_CLK_DSI_PHY_REF>;
> +   clock-names = "phy_ref";
> +   #phy-cells = <0>;
> +   power-domains = <_blk_ctl 
> IMX8MM_BLK_CTL_PD_DISPMIX_MIPI_DPHY>;
> +   status = "disabled";
> };
>
> dispmix_blk_ctl: blk-ctl@32e28000 {
> --
> 2.25.1
>


Re: [PATCH v2] drm/panfrost:report the full raw fault information instead

2021-06-21 Thread Chunyou Tang
Hi Steve,
I will send a new patch with a suitable subject/commit message.
But should I send a v3 or a new patch?

I have hit a bug with the GPU and have no idea how to fix it;
any suggestions would be very welcome.

You can see the following kernel log:

Jun 20 10:20:13 icube kernel: [  774.566760] mvp_gpu :05:00.0: GPU Fault 0x0088 (SHAREABILITY_FAULT) at 0x0310fd00
Jun 20 10:20:13 icube kernel: [  774.566764] mvp_gpu :05:00.0: There were multiple GPU faults - some have not been reported
Jun 20 10:20:13 icube kernel: [  774.667542] mvp_gpu :05:00.0: AS_ACTIVE bit stuck
Jun 20 10:20:13 icube kernel: [  774.767900] mvp_gpu :05:00.0: AS_ACTIVE bit stuck
Jun 20 10:20:13 icube kernel: [  774.868546] mvp_gpu :05:00.0: AS_ACTIVE bit stuck
Jun 20 10:20:13 icube kernel: [  774.968910] mvp_gpu :05:00.0: AS_ACTIVE bit stuck
Jun 20 10:20:13 icube kernel: [  775.069251] mvp_gpu :05:00.0: AS_ACTIVE bit stuck
Jun 20 10:20:22 icube kernel: [  783.693971] mvp_gpu :05:00.0: gpu sched timeout, js=1, config=0x7300, status=0x8, head=0x362c900, tail=0x362c100, sched_job=3252fb84

In
https://lore.kernel.org/dri-devel/20200510165538.19720-1-peron.c...@gmail.com/
there was a bug similar to mine, and I found you on the mailing list; I don't
know how it was fixed.

I need your help!

Thanks very much!

Chunyou

On Mon, 21 Jun 2021 11:45:20 +0100,
Steven Price  wrote:

> On 19/06/2021 04:18, Chunyou Tang wrote:
> > Hi Steve,
> > 1. Now I know how to write the subject.
> > 2. The low 8 bits are the exception type in the spec.
> > 
> > You can see panfrost_exception_name():
> > 
> > switch (exception_code) {
> > /* Non-Fault Status code */
> > case 0x00: return "NOT_STARTED/IDLE/OK";
> > case 0x01: return "DONE";
> > case 0x02: return "INTERRUPTED";
> > case 0x03: return "STOPPED";
> > case 0x04: return "TERMINATED";
> > case 0x08: return "ACTIVE";
> > 
> > 
> > case 0xD8: return "ACCESS_FLAG";
> > case 0xD9 ... 0xDF: return "ACCESS_FLAG";
> > case 0xE0 ... 0xE7: return "ADDRESS_SIZE_FAULT";
> > case 0xE8 ... 0xEF: return "MEMORY_ATTRIBUTES_FAULT";
> > }
> > return "UNKNOWN";
> > }
> > 
> > The exception_code in each case is only 8 bits, so if fault_status
> > in panfrost_gpu_irq_handler() isn't masked with & 0xFF, it can't get the
> > correct exception reason; it will always be UNKNOWN.
> 
> Yes, I'm happy with the change - I just need a patch that I can apply.
> At the moment this patch only changes the first '0x%08x' output rather
> than the call to panfrost_exception_name() as well. So we just need a
> patch which does:
> 
> - fault_status & 0xFF, panfrost_exception_name(pfdev, fault_status),
> + fault_status, panfrost_exception_name(pfdev, fault_status & 0xFF),
> 
> along with a suitable subject/commit message describing the change. If
> you can send me that I can apply it.
> 
> Thanks,
> 
> Steve
> 
> PS. Sorry for going round in circles here - I'm trying to help you get
> setup so you'll be able to contribute patches easily in future. An
> important part of that is ensuring you can send a properly formatted
> patch to the list.
> 
> PPS. I'm still not receiving your emails directly. I don't think it's
> a problem at my end because I'm receiving other emails, but if you can
> somehow fix the problem you're likely to receive a faster response.
> 
> > On Fri, 18 Jun 2021 13:43:24 +0100,
> > Steven Price  wrote:
> > 
> >> On 17/06/2021 07:20, ChunyouTang wrote:
> >>> From: ChunyouTang 
> >>>
> >>> of the low 8 bits.
> >>
> >> Please don't split the subject like this. The first line of the
> >> commit should be a (very short) summary of the patch. Then a blank
> >> line and then a longer description of what the purpose of the
> >> patch is and why it's needed.
> >>
> >> Also you previously had this as part of a series (the first part
> >> adding the "& 0xFF" in the panfrost_exception_name() call). I'm not
> >> sure we need two patches for the single line, but as it stands this
> >> patch doesn't apply.
> >>
> >> Also I'm still not receiving any emails from you directly (only via
> >> the list), so it's possible I might have missed something you sent.
> >>
> >> Steve
> >>
> >>>
> >>> Signed-off-by: ChunyouTang 
> >>> ---
> >>>  drivers/gpu/drm/panfrost/panfrost_gpu.c | 2 +-
> >>>  1 file changed, 1 insertion(+), 1 deletion(-)
> >>>
> >>> diff --git a/drivers/gpu/drm/panfrost/panfrost_gpu.c
> >>> b/drivers/gpu/drm/panfrost/panfrost_gpu.c index
> >>> 1fffb6a0b24f..d2d287bbf4e7 100644 ---
> >>> a/drivers/gpu/drm/panfrost/panfrost_gpu.c +++
> >>> b/drivers/gpu/drm/panfrost/panfrost_gpu.c @@ -33,7 +33,7 @@ static
> >>> irqreturn_t panfrost_gpu_irq_handler(int irq, void *data) address
> >>> |= gpu_read(pfdev, GPU_FAULT_ADDRESS_LO); 
> >>>   dev_warn(pfdev->dev, "GPU Fault 0x%08x (%s) at
> >>> 0x%016llx\n",
> >>> -  fault_status & 0xFF,
> >>> panfrost_exception_name(pfdev, fault_status & 0xFF),
> >>> +  fault_status,
> >>> 

Re: [PATCH] drm/panfrost:modify 'break' to 'continue' to traverse the circulation

2021-06-21 Thread Chunyou Tang
Hi Steve,
I made a mistake about the code branch; I will test it later.
Thanks for your reply.

Chunyou

On Mon, 21 Jun 2021 11:45:18 +0100,
Steven Price  wrote:

> On 19/06/2021 04:09, Chunyou Tang wrote:
> > Hi Steve,
> > 1,
> > from
> > https://lore.kernel.org/lkml/31644881-134a-2d6e-dddf-e658a3a81...@arm.com/
> > I can see what you sent. I used a wrong email address; it is now correct.
> > 2.
> >>> Unless I'm mistaken the situation where some mappings may be NULL
> >>> is caused by the loop in panfrost_lookup_bos() not completing
> >>> successfully
> >>> (panfrost_gem_mapping_get() returning NULL). In this case if
> >>> mappings[i]
> >>> is NULL then all following mappings must also be NULL. So 'break'
> >>> allows
> >>> us to skip the later ones. Admittedly the performance here isn't
> >>> important so I'm not sure it's worth the optimisation, but AIUI
> >>> this code isn't actually wrong.
> > 
> > from panfrost_lookup_bos(),you can see:
> > for (i = 0; i < job->bo_count; i++) {
> > struct panfrost_gem_mapping *mapping;
> > 
> > bo = to_panfrost_bo(job->bos[i]);
> > ICUBE_DEBUG_PRINTK("panfrost bo gem handle=0x%x
> > is_dumb=%d\n", bo->gem_handle, bo->is_dumb);
> > if (!bo->is_dumb) {
> >mapping = panfrost_gem_mapping_get(bo, priv);
> >if (!mapping) {
> > ret = -EINVAL;
> > break;
> >}
> > 
> > atomic_inc(>gpu_usecount);
> > job->mappings[i] = mapping;
> > } else {
> > atomic_inc(>gpu_usecount);
> > job->mappings[i] = NULL;
> > }
> > }
> 
> This code isn't upstream - in drm-misc/drm-misc-next (and all mainline
> kernels from what I can tell) this doesn't have any "is_dumb" test.
> Which branch are you using?
> 
> > If bo->is_dumb is true, job->mappings[i] will be set to NULL and the
> > loop will continue, so if job->mappings[i] is NULL, the following
> > entries are not necessarily NULL.
> 
> I agree that with the above code the panfrost_job_cleanup() would need
> changing. But we don't (currently) have this code upstream, so this
> change doesn't make sense upstream.
> 
> Thanks,
> 
> Steve
> 
> > 3,
> > I've hit this problem in our project; the values of is_dumb look like
> > this:
> > 0
> > 0
> > 0
> > 1
> > 0
> > 0
> > 0
> > so, when job->mappings[i] is NULL, we cannot break out of the loop in
> > panfrost_job_cleanup().
> > 
> > thanks
> > Chunyou
> > 
> > On Fri, 18 Jun 2021 13:43:25 +0100,
> > Steven Price  wrote:
> > 
> >> On 17/06/2021 09:04, ChunyouTang wrote:
> >>> From: ChunyouTang 
> >>>
> >>> The 'break' can cause 'Memory manager not clean during takedown'.
> >>>
> >>> The loop must not be terminated with 'break'; it should use
> >>>
> >>> 'continue' to keep traversing, so that every mapping
> >>>
> >>> which is not NULL is put.
> >>
> >> You don't appear to have answered my question about whether you've
> >> actually seen this happen (and ideally what circumstances). In my
> >> previous email[1] I explained why I don't think this is needed. You
> >> need to convince me that I've overlooked something.
> >>
> >> Thanks,
> >>
> >> Steve
> >>
> >> [1]
> >> https://lore.kernel.org/r/31644881-134a-2d6e-dddf-e658a3a8176b%40arm.com
> >>
> >>> Signed-off-by: ChunyouTang 
> >>> ---
> >>>  drivers/gpu/drm/panfrost/panfrost_job.c | 2 +-
> >>>  1 file changed, 1 insertion(+), 1 deletion(-)
> >>>
> >>> diff --git a/drivers/gpu/drm/panfrost/panfrost_job.c
> >>> b/drivers/gpu/drm/panfrost/panfrost_job.c index
> >>> 6003cfeb1322..52bccc1d2d42 100644 ---
> >>> a/drivers/gpu/drm/panfrost/panfrost_job.c +++
> >>> b/drivers/gpu/drm/panfrost/panfrost_job.c @@ -281,7 +281,7 @@
> >>> static void panfrost_job_cleanup(struct kref *ref) if
> >>> (job->mappings) { for (i = 0; i < job->bo_count; i++) {
> >>>   if (!job->mappings[i])
> >>> - break;
> >>> + continue;
> >>>  
> >>>   atomic_dec(>mappings[i]->obj->gpu_usecount);
> >>>   panfrost_gem_mapping_put(job->mappings[i]);
> >>>
> > 
> > 




Re: [PATCH v3 1/2] habanalabs: define uAPI to export FD for DMA-BUF

2021-06-21 Thread Jason Gunthorpe
On Mon, Jun 21, 2021 at 10:24:16PM +0300, Oded Gabbay wrote:

> Another thing I want to emphasize is that we are doing p2p only
> through the export/import of the FD. We do *not* allow the user to
> mmap the dma-buf as we do not support direct IO. So there is no access
> to these pages through the userspace.

Arguably mmap()ing the memory is a better choice, and is the direction
that Logan's series goes in. Here the use of DMABUF was specifically
designed to allow hitless revocation of the memory, which this isn't
even using.

So you are taking the hit of very limited hardware support and reduced
performance just to squeeze into DMABUF.

Jason


Re: [Freedreno] [PATCH 8/8] drm/msm/dsi: remove msm_dsi_dphy_timing from msm_dsi_phy

2021-06-21 Thread Dmitry Baryshkov
On Tue, 22 Jun 2021 at 01:44,  wrote:
>
> On 2021-05-15 06:12, Dmitry Baryshkov wrote:
> > Remove struct msm_dsi_dphy_timing field from the struct msm_dsi_phy.
> > There is no need to store them.
> >
> > Signed-off-by: Dmitry Baryshkov 
> > ---
> >  drivers/gpu/drm/msm/dsi/phy/dsi_phy.c  | 18 ++
> >  drivers/gpu/drm/msm/dsi/phy/dsi_phy.h  | 10 --
> >  drivers/gpu/drm/msm/dsi/phy/dsi_phy_10nm.c | 11 +++
> >  drivers/gpu/drm/msm/dsi/phy/dsi_phy_14nm.c | 11 +++
> >  drivers/gpu/drm/msm/dsi/phy/dsi_phy_20nm.c | 10 ++
> >  drivers/gpu/drm/msm/dsi/phy/dsi_phy_28nm.c | 12 
> >  .../gpu/drm/msm/dsi/phy/dsi_phy_28nm_8960.c| 10 ++
> >  drivers/gpu/drm/msm/dsi/phy/dsi_phy_7nm.c  | 13 -
> >  8 files changed, 40 insertions(+), 55 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/msm/dsi/phy/dsi_phy.c
> > b/drivers/gpu/drm/msm/dsi/phy/dsi_phy.c
> > index 53a02c02dd6e..47145cab6b55 100644
> > --- a/drivers/gpu/drm/msm/dsi/phy/dsi_phy.c
> > +++ b/drivers/gpu/drm/msm/dsi/phy/dsi_phy.c
> > @@ -453,6 +453,8 @@ int msm_dsi_dphy_timing_calc_v4(struct
> > msm_dsi_dphy_timing *timing,
> >   tmax = 255;
> >   timing->shared_timings.clk_pre = DIV_ROUND_UP((tmax - tmin) * 125,
> > 1) + tmin;
> >
> > + timing->bitclk_rate = bit_rate;
> > +
>
> I didn't follow this part of the change. Agreed that the 7nm PHY is using
> this, but
> why do we need to start storing this in the timing node?
> Why can't we continue using it from msm_dsi_phy_clk_request?

As I wrote earlier
(https://lore.kernel.org/linux-arm-msm/71839b49-554c-fcc4-d110-0c8a49905...@linaro.org/),
I'd withdraw/ignore patch 8 for now, but the rest of the patch series
is valid.

Thank you for your review. I'll repost the series w/o patch 8 and with
the dsi.yaml changes included.

>
> >   DBG("%d, %d, %d, %d, %d, %d, %d, %d, %d, %d",
> >   timing->shared_timings.clk_pre, 
> > timing->shared_timings.clk_post,
> >   timing->clk_zero, timing->clk_trail, timing->clk_prepare,
> > timing->hs_exit,
> > @@ -756,6 +758,7 @@ int msm_dsi_phy_enable(struct msm_dsi_phy *phy,
> >   struct msm_dsi_phy_shared_timings *shared_timings)
> >  {
> >   struct device *dev = >pdev->dev;
> > + struct msm_dsi_dphy_timing timing;
> >   int ret;
> >
> >   if (!phy || !phy->cfg->ops.enable)
> > @@ -775,15 +778,22 @@ int msm_dsi_phy_enable(struct msm_dsi_phy *phy,
> >   goto reg_en_fail;
> >   }
> >
> > - ret = phy->cfg->ops.enable(phy, clk_req);
> > + if (!phy->cfg->ops.dphy_timing_calc ||
> > + phy->cfg->ops.dphy_timing_calc(, clk_req)) {
> > + DRM_DEV_ERROR(>pdev->dev,
> > + "%s: D-PHY timing calculation failed\n", __func__);
> > + return -EINVAL;
> > + }
> > +
> > + memcpy(shared_timings, _timings,
> > +sizeof(*shared_timings));
> > +
> > + ret = phy->cfg->ops.enable(phy, );
> >   if (ret) {
> >   DRM_DEV_ERROR(dev, "%s: phy enable failed, %d\n", __func__, 
> > ret);
> >   goto phy_en_fail;
> >   }
> >
> > - memcpy(shared_timings, >timing.shared_timings,
> > -sizeof(*shared_timings));
> > -
> >   /*
> >* Resetting DSI PHY silently changes its PLL registers to reset
> > status,
> >* which will confuse clock driver and result in wrong output rate of
> > diff --git a/drivers/gpu/drm/msm/dsi/phy/dsi_phy.h
> > b/drivers/gpu/drm/msm/dsi/phy/dsi_phy.h
> > index 94a77ac364d3..9ba03a242d24 100644
> > --- a/drivers/gpu/drm/msm/dsi/phy/dsi_phy.h
> > +++ b/drivers/gpu/drm/msm/dsi/phy/dsi_phy.h
> > @@ -17,10 +17,14 @@
> >  #define dsi_phy_write_udelay(offset, data, delay_us) {
> > msm_writel((data), (offset)); udelay(delay_us); }
> >  #define dsi_phy_write_ndelay(offset, data, delay_ns) {
> > msm_writel((data), (offset)); ndelay(delay_ns); }
> >
> > +struct msm_dsi_dphy_timing;
> > +
> >  struct msm_dsi_phy_ops {
> >   int (*pll_init)(struct msm_dsi_phy *phy);
> > - int (*enable)(struct msm_dsi_phy *phy,
> > + int (*dphy_timing_calc)(struct msm_dsi_dphy_timing *timing,
> >   struct msm_dsi_phy_clk_request *clk_req);
> > + int (*enable)(struct msm_dsi_phy *phy,
> > + struct msm_dsi_dphy_timing *timing);
> >   void (*disable)(struct msm_dsi_phy *phy);
> >   void (*save_pll_state)(struct msm_dsi_phy *phy);
> >   int (*restore_pll_state)(struct msm_dsi_phy *phy);
> > @@ -73,6 +77,9 @@ struct msm_dsi_dphy_timing {
> >   u32 hs_prep_dly_ckln;
> >   u8 hs_halfbyte_en;
> >   u8 hs_halfbyte_en_ckln;
> > +
> > + /* For PHY v4 only */
> > + unsigned long bitclk_rate;
> >  };
> >
> >  #define DSI_BYTE_PLL_CLK 0
> > @@ -90,7 +97,6 @@ struct msm_dsi_phy {
> >   struct clk *ahb_clk;
> >   struct regulator_bulk_data supplies[DSI_DEV_REGULATOR_MAX];
> >
> > - struct 

Re: [Freedreno] [PATCH 8/8] drm/msm/dsi: remove msm_dsi_dphy_timing from msm_dsi_phy

2021-06-21 Thread abhinavk

On 2021-05-15 06:12, Dmitry Baryshkov wrote:

Remove struct msm_dsi_dphy_timing field from the struct msm_dsi_phy.
There is no need to store them.

Signed-off-by: Dmitry Baryshkov 
---
 drivers/gpu/drm/msm/dsi/phy/dsi_phy.c  | 18 ++
 drivers/gpu/drm/msm/dsi/phy/dsi_phy.h  | 10 --
 drivers/gpu/drm/msm/dsi/phy/dsi_phy_10nm.c | 11 +++
 drivers/gpu/drm/msm/dsi/phy/dsi_phy_14nm.c | 11 +++
 drivers/gpu/drm/msm/dsi/phy/dsi_phy_20nm.c | 10 ++
 drivers/gpu/drm/msm/dsi/phy/dsi_phy_28nm.c | 12 
 .../gpu/drm/msm/dsi/phy/dsi_phy_28nm_8960.c| 10 ++
 drivers/gpu/drm/msm/dsi/phy/dsi_phy_7nm.c  | 13 -
 8 files changed, 40 insertions(+), 55 deletions(-)

diff --git a/drivers/gpu/drm/msm/dsi/phy/dsi_phy.c
b/drivers/gpu/drm/msm/dsi/phy/dsi_phy.c
index 53a02c02dd6e..47145cab6b55 100644
--- a/drivers/gpu/drm/msm/dsi/phy/dsi_phy.c
+++ b/drivers/gpu/drm/msm/dsi/phy/dsi_phy.c
@@ -453,6 +453,8 @@ int msm_dsi_dphy_timing_calc_v4(struct
msm_dsi_dphy_timing *timing,
tmax = 255;
timing->shared_timings.clk_pre = DIV_ROUND_UP((tmax - tmin) * 125,
1) + tmin;

+   timing->bitclk_rate = bit_rate;
+


I didn't follow this part of the change. Agreed that the 7nm PHY is using
this, but why do we need to start storing this in the timing node?
Why can't we continue using it from msm_dsi_phy_clk_request?


DBG("%d, %d, %d, %d, %d, %d, %d, %d, %d, %d",
timing->shared_timings.clk_pre, timing->shared_timings.clk_post,
 		timing->clk_zero, timing->clk_trail, timing->clk_prepare, 
timing->hs_exit,

@@ -756,6 +758,7 @@ int msm_dsi_phy_enable(struct msm_dsi_phy *phy,
struct msm_dsi_phy_shared_timings *shared_timings)
 {
struct device *dev = >pdev->dev;
+   struct msm_dsi_dphy_timing timing;
int ret;

if (!phy || !phy->cfg->ops.enable)
@@ -775,15 +778,22 @@ int msm_dsi_phy_enable(struct msm_dsi_phy *phy,
goto reg_en_fail;
}

-   ret = phy->cfg->ops.enable(phy, clk_req);
+   if (!phy->cfg->ops.dphy_timing_calc ||
+   phy->cfg->ops.dphy_timing_calc(, clk_req)) {
+   DRM_DEV_ERROR(>pdev->dev,
+   "%s: D-PHY timing calculation failed\n", __func__);
+   return -EINVAL;
+   }
+
+   memcpy(shared_timings, _timings,
+  sizeof(*shared_timings));
+
+   ret = phy->cfg->ops.enable(phy, );
if (ret) {
DRM_DEV_ERROR(dev, "%s: phy enable failed, %d\n", __func__, 
ret);
goto phy_en_fail;
}

-   memcpy(shared_timings, >timing.shared_timings,
-  sizeof(*shared_timings));
-
/*
 	 * Resetting DSI PHY silently changes its PLL registers to reset 
status,

 * which will confuse clock driver and result in wrong output rate of
diff --git a/drivers/gpu/drm/msm/dsi/phy/dsi_phy.h
b/drivers/gpu/drm/msm/dsi/phy/dsi_phy.h
index 94a77ac364d3..9ba03a242d24 100644
--- a/drivers/gpu/drm/msm/dsi/phy/dsi_phy.h
+++ b/drivers/gpu/drm/msm/dsi/phy/dsi_phy.h
@@ -17,10 +17,14 @@
 #define dsi_phy_write_udelay(offset, data, delay_us) {
msm_writel((data), (offset)); udelay(delay_us); }
 #define dsi_phy_write_ndelay(offset, data, delay_ns) {
msm_writel((data), (offset)); ndelay(delay_ns); }

+struct msm_dsi_dphy_timing;
+
 struct msm_dsi_phy_ops {
int (*pll_init)(struct msm_dsi_phy *phy);
-   int (*enable)(struct msm_dsi_phy *phy,
+   int (*dphy_timing_calc)(struct msm_dsi_dphy_timing *timing,
struct msm_dsi_phy_clk_request *clk_req);
+   int (*enable)(struct msm_dsi_phy *phy,
+   struct msm_dsi_dphy_timing *timing);
void (*disable)(struct msm_dsi_phy *phy);
void (*save_pll_state)(struct msm_dsi_phy *phy);
int (*restore_pll_state)(struct msm_dsi_phy *phy);
@@ -73,6 +77,9 @@ struct msm_dsi_dphy_timing {
u32 hs_prep_dly_ckln;
u8 hs_halfbyte_en;
u8 hs_halfbyte_en_ckln;
+
+   /* For PHY v4 only */
+   unsigned long bitclk_rate;
 };

 #define DSI_BYTE_PLL_CLK   0
@@ -90,7 +97,6 @@ struct msm_dsi_phy {
struct clk *ahb_clk;
struct regulator_bulk_data supplies[DSI_DEV_REGULATOR_MAX];

-   struct msm_dsi_dphy_timing timing;
const struct msm_dsi_phy_cfg *cfg;

enum msm_dsi_phy_usecase usecase;
diff --git a/drivers/gpu/drm/msm/dsi/phy/dsi_phy_10nm.c
b/drivers/gpu/drm/msm/dsi/phy/dsi_phy_10nm.c
index 34bc93548fcf..bc838ee4f9b9 100644
--- a/drivers/gpu/drm/msm/dsi/phy/dsi_phy_10nm.c
+++ b/drivers/gpu/drm/msm/dsi/phy/dsi_phy_10nm.c
@@ -789,24 +789,17 @@ static void dsi_phy_hw_v3_0_lane_settings(struct
msm_dsi_phy *phy)
 }

 static int dsi_10nm_phy_enable(struct msm_dsi_phy *phy,
-  struct msm_dsi_phy_clk_request *clk_req)
+  struct msm_dsi_dphy_timing *timing)
 {
int ret;
u32 

Re: [Freedreno] [PATCH 7/8] drm/msm/dsi: drop msm_dsi_phy_get_shared_timings

2021-06-21 Thread abhinavk

On 2021-05-15 06:12, Dmitry Baryshkov wrote:

Instead of fetching shared timing through an extra function call, get
them directly from msm_dsi_phy_enable. This would allow removing phy
timings from the struct msm_dsi_phy in the next patch.

Signed-off-by: Dmitry Baryshkov 

Reviewed-by: Abhinav Kumar 

---
 drivers/gpu/drm/msm/dsi/dsi.h |  5 ++---
 drivers/gpu/drm/msm/dsi/dsi_manager.c |  3 +--
 drivers/gpu/drm/msm/dsi/phy/dsi_phy.c | 13 +
 3 files changed, 8 insertions(+), 13 deletions(-)

diff --git a/drivers/gpu/drm/msm/dsi/dsi.h 
b/drivers/gpu/drm/msm/dsi/dsi.h

index 2041980548f0..84f9900ff878 100644
--- a/drivers/gpu/drm/msm/dsi/dsi.h
+++ b/drivers/gpu/drm/msm/dsi/dsi.h
@@ -163,10 +163,9 @@ struct msm_dsi_phy_clk_request {
 void msm_dsi_phy_driver_register(void);
 void msm_dsi_phy_driver_unregister(void);
 int msm_dsi_phy_enable(struct msm_dsi_phy *phy,
-   struct msm_dsi_phy_clk_request *clk_req);
+   struct msm_dsi_phy_clk_request *clk_req,
+   struct msm_dsi_phy_shared_timings *shared_timings);
 void msm_dsi_phy_disable(struct msm_dsi_phy *phy);
-void msm_dsi_phy_get_shared_timings(struct msm_dsi_phy *phy,
-   struct msm_dsi_phy_shared_timings *shared_timing);
 void msm_dsi_phy_set_usecase(struct msm_dsi_phy *phy,
 enum msm_dsi_phy_usecase uc);
 void msm_dsi_phy_pll_save_state(struct msm_dsi_phy *phy);
diff --git a/drivers/gpu/drm/msm/dsi/dsi_manager.c
b/drivers/gpu/drm/msm/dsi/dsi_manager.c
index 12efc8c69046..88d56a2bc8ab 100644
--- a/drivers/gpu/drm/msm/dsi/dsi_manager.c
+++ b/drivers/gpu/drm/msm/dsi/dsi_manager.c
@@ -118,8 +118,7 @@ static int enable_phy(struct msm_dsi *msm_dsi,

msm_dsi_host_get_phy_clk_req(msm_dsi->host, _req, is_dual_dsi);

-   ret = msm_dsi_phy_enable(msm_dsi->phy, _req);
-   msm_dsi_phy_get_shared_timings(msm_dsi->phy, shared_timings);
+   ret = msm_dsi_phy_enable(msm_dsi->phy, _req, shared_timings);

return ret;
 }
diff --git a/drivers/gpu/drm/msm/dsi/phy/dsi_phy.c
b/drivers/gpu/drm/msm/dsi/phy/dsi_phy.c
index feaeb34b7071..53a02c02dd6e 100644
--- a/drivers/gpu/drm/msm/dsi/phy/dsi_phy.c
+++ b/drivers/gpu/drm/msm/dsi/phy/dsi_phy.c
@@ -752,7 +752,8 @@ void __exit msm_dsi_phy_driver_unregister(void)
 }

 int msm_dsi_phy_enable(struct msm_dsi_phy *phy,
-   struct msm_dsi_phy_clk_request *clk_req)
+   struct msm_dsi_phy_clk_request *clk_req,
+   struct msm_dsi_phy_shared_timings *shared_timings)
 {
struct device *dev = >pdev->dev;
int ret;
@@ -780,6 +781,9 @@ int msm_dsi_phy_enable(struct msm_dsi_phy *phy,
goto phy_en_fail;
}

+   memcpy(shared_timings, >timing.shared_timings,
+  sizeof(*shared_timings));
+
/*
 	 * Resetting DSI PHY silently changes its PLL registers to reset 
status,

 * which will confuse clock driver and result in wrong output rate of
@@ -819,13 +823,6 @@ void msm_dsi_phy_disable(struct msm_dsi_phy *phy)
dsi_phy_disable_resource(phy);
 }

-void msm_dsi_phy_get_shared_timings(struct msm_dsi_phy *phy,
-   struct msm_dsi_phy_shared_timings *shared_timings)
-{
-   memcpy(shared_timings, >timing.shared_timings,
-  sizeof(*shared_timings));
-}
-
 void msm_dsi_phy_set_usecase(struct msm_dsi_phy *phy,
 enum msm_dsi_phy_usecase uc)
 {


Re: [Freedreno] [PATCH 6/8] drm/msm/dsi: phy: use of_device_get_match_data

2021-06-21 Thread abhinavk

On 2021-05-15 06:12, Dmitry Baryshkov wrote:

Use of_device_get_match_data() instead of of_match_node().

Signed-off-by: Dmitry Baryshkov 

Reviewed-by: Abhinav Kumar 

---
 drivers/gpu/drm/msm/dsi/phy/dsi_phy.c | 10 --
 1 file changed, 4 insertions(+), 6 deletions(-)

diff --git a/drivers/gpu/drm/msm/dsi/phy/dsi_phy.c
b/drivers/gpu/drm/msm/dsi/phy/dsi_phy.c
index f2b5e0f63a16..feaeb34b7071 100644
--- a/drivers/gpu/drm/msm/dsi/phy/dsi_phy.c
+++ b/drivers/gpu/drm/msm/dsi/phy/dsi_phy.c
@@ -625,17 +625,12 @@ static int dsi_phy_driver_probe(struct
platform_device *pdev)
 {
struct msm_dsi_phy *phy;
struct device *dev = >dev;
-   const struct of_device_id *match;
int ret;

phy = devm_kzalloc(dev, sizeof(*phy), GFP_KERNEL);
if (!phy)
return -ENOMEM;

-   match = of_match_node(dsi_phy_dt_match, dev->of_node);
-   if (!match)
-   return -ENODEV;
-
phy->provided_clocks = devm_kzalloc(dev,
struct_size(phy->provided_clocks, hws, 
NUM_PROVIDED_CLKS),
GFP_KERNEL);
@@ -644,7 +639,10 @@ static int dsi_phy_driver_probe(struct
platform_device *pdev)

phy->provided_clocks->num = NUM_PROVIDED_CLKS;

-   phy->cfg = match->data;
+   phy->cfg = of_device_get_match_data(>dev);
+   if (!phy->cfg)
+   return -ENODEV;
+
phy->pdev = pdev;

phy->id = dsi_phy_get_id(phy);


Re: [Freedreno] [PATCH 5/8] drm/msm/dsi: stop setting clock parents manually

2021-06-21 Thread abhinavk

On 2021-05-15 06:12, Dmitry Baryshkov wrote:

There is no reason to set clock parents manually; use the device tree to
assign DSI/display clock parents to DSI PHY clocks. Dropping this manual
setup allows us to drop repeated code and to move registration of hw
clock providers to a generic place.

Signed-off-by: Dmitry Baryshkov 
Once you have documented (or pointed me to the documentation) that
assigned-clock-parents
is now a mandatory property for the DSI node, this is a good cleanup,
hence:


Reviewed-by: Abhinav Kumar 


---
 drivers/gpu/drm/msm/dsi/dsi.h |  2 --
 drivers/gpu/drm/msm/dsi/dsi_host.c| 51 ---
 drivers/gpu/drm/msm/dsi/dsi_manager.c |  5 ---
 drivers/gpu/drm/msm/dsi/phy/dsi_phy.c | 11 --
 4 files changed, 69 deletions(-)

diff --git a/drivers/gpu/drm/msm/dsi/dsi.h 
b/drivers/gpu/drm/msm/dsi/dsi.h

index 7abfeab08165..2041980548f0 100644
--- a/drivers/gpu/drm/msm/dsi/dsi.h
+++ b/drivers/gpu/drm/msm/dsi/dsi.h
@@ -169,8 +169,6 @@ void msm_dsi_phy_get_shared_timings(struct 
msm_dsi_phy *phy,

struct msm_dsi_phy_shared_timings *shared_timing);
 void msm_dsi_phy_set_usecase(struct msm_dsi_phy *phy,
 enum msm_dsi_phy_usecase uc);
-int msm_dsi_phy_get_clk_provider(struct msm_dsi_phy *phy,
-   struct clk **byte_clk_provider, struct clk **pixel_clk_provider);
 void msm_dsi_phy_pll_save_state(struct msm_dsi_phy *phy);
 int msm_dsi_phy_pll_restore_state(struct msm_dsi_phy *phy);

diff --git a/drivers/gpu/drm/msm/dsi/dsi_host.c
b/drivers/gpu/drm/msm/dsi/dsi_host.c
index 8a10e4343281..1f444101e551 100644
--- a/drivers/gpu/drm/msm/dsi/dsi_host.c
+++ b/drivers/gpu/drm/msm/dsi/dsi_host.c
@@ -2223,57 +2223,6 @@ void msm_dsi_host_cmd_xfer_commit(struct
mipi_dsi_host *host, u32 dma_base,
wmb();
 }

-int msm_dsi_host_set_src_pll(struct mipi_dsi_host *host,
-   struct msm_dsi_phy *src_phy)
-{
-   struct msm_dsi_host *msm_host = to_msm_dsi_host(host);
-   struct clk *byte_clk_provider, *pixel_clk_provider;
-   int ret;
-
-   ret = msm_dsi_phy_get_clk_provider(src_phy,
-   &byte_clk_provider, &pixel_clk_provider);
-   if (ret) {
-   pr_info("%s: can't get provider from pll, don't set parent\n",
-   __func__);
-   return 0;
-   }
-
-   ret = clk_set_parent(msm_host->byte_clk_src, byte_clk_provider);
-   if (ret) {
-   pr_err("%s: can't set parent to byte_clk_src. ret=%d\n",
-   __func__, ret);
-   goto exit;
-   }
-
-   ret = clk_set_parent(msm_host->pixel_clk_src, pixel_clk_provider);
-   if (ret) {
-   pr_err("%s: can't set parent to pixel_clk_src. ret=%d\n",
-   __func__, ret);
-   goto exit;
-   }
-
-   if (msm_host->dsi_clk_src) {
-   ret = clk_set_parent(msm_host->dsi_clk_src, pixel_clk_provider);
-   if (ret) {
-   pr_err("%s: can't set parent to dsi_clk_src. ret=%d\n",
-   __func__, ret);
-   goto exit;
-   }
-   }
-
-   if (msm_host->esc_clk_src) {
-   ret = clk_set_parent(msm_host->esc_clk_src, byte_clk_provider);
-   if (ret) {
-   pr_err("%s: can't set parent to esc_clk_src. ret=%d\n",
-   __func__, ret);
-   goto exit;
-   }
-   }
-
-exit:
-   return ret;
-}
-
 void msm_dsi_host_reset_phy(struct mipi_dsi_host *host)
 {
struct msm_dsi_host *msm_host = to_msm_dsi_host(host);
diff --git a/drivers/gpu/drm/msm/dsi/dsi_manager.c
b/drivers/gpu/drm/msm/dsi/dsi_manager.c
index cd016576e8c5..12efc8c69046 100644
--- a/drivers/gpu/drm/msm/dsi/dsi_manager.c
+++ b/drivers/gpu/drm/msm/dsi/dsi_manager.c
@@ -78,7 +78,6 @@ static int dsi_mgr_setup_components(int id)
return ret;

msm_dsi_phy_set_usecase(msm_dsi->phy, MSM_DSI_PHY_STANDALONE);
-   ret = msm_dsi_host_set_src_pll(msm_dsi->host, msm_dsi->phy);
} else if (!other_dsi) {
ret = 0;
} else {
@@ -105,10 +104,6 @@ static int dsi_mgr_setup_components(int id)
MSM_DSI_PHY_MASTER);
msm_dsi_phy_set_usecase(clk_slave_dsi->phy,
MSM_DSI_PHY_SLAVE);
-   ret = msm_dsi_host_set_src_pll(msm_dsi->host, 
clk_master_dsi->phy);
-   if (ret)
-   return ret;
-		ret = msm_dsi_host_set_src_pll(other_dsi->host, 
clk_master_dsi->phy);

}

return ret;
diff --git a/drivers/gpu/drm/msm/dsi/phy/dsi_phy.c
b/drivers/gpu/drm/msm/dsi/phy/dsi_phy.c
index ff7f2ec42030..f2b5e0f63a16 100644
--- a/drivers/gpu/drm/msm/dsi/phy/dsi_phy.c
+++ b/drivers/gpu/drm/msm/dsi/phy/dsi_phy.c
@@ -835,17 +835,6 @@ void msm_dsi_phy_set_usecase(struct 

Re: [PATCH 4/8] arm64: dts: qcom: sm8250: assign DSI clock source parents

2021-06-21 Thread abhinavk

On 2021-05-15 06:12, Dmitry Baryshkov wrote:

Assign DSI clock source parents to DSI PHY clocks.

Signed-off-by: Dmitry Baryshkov 

Can you please confirm if you have validated dual DSI with this change?
With that,
Reviewed-by: Abhinav Kumar 

---
 arch/arm64/boot/dts/qcom/sm8250.dtsi | 6 ++
 1 file changed, 6 insertions(+)

diff --git a/arch/arm64/boot/dts/qcom/sm8250.dtsi
b/arch/arm64/boot/dts/qcom/sm8250.dtsi
index 947e1accae3a..b6ed94497e8a 100644
--- a/arch/arm64/boot/dts/qcom/sm8250.dtsi
+++ b/arch/arm64/boot/dts/qcom/sm8250.dtsi
@@ -2445,6 +2445,9 @@ dsi0: dsi@ae94000 {
  "iface",
  "bus";

+   assigned-clocks = <&dispcc DISP_CC_MDSS_BYTE0_CLK_SRC>, <&dispcc DISP_CC_MDSS_PCLK0_CLK_SRC>;
+   assigned-clock-parents = <&dsi0_phy 0>, <&dsi0_phy 1>;
+
operating-points-v2 = <_opp_table>;
power-domains = < SM8250_MMCX>;

@@ -2512,6 +2515,9 @@ dsi1: dsi@ae96000 {
  "iface",
  "bus";

+   assigned-clocks = <&dispcc DISP_CC_MDSS_BYTE1_CLK_SRC>, <&dispcc DISP_CC_MDSS_PCLK1_CLK_SRC>;
+   assigned-clock-parents = <&dsi1_phy 0>, <&dsi1_phy 1>;
+
operating-points-v2 = <_opp_table>;
power-domains = < SM8250_MMCX>;


Re: [Freedreno] [PATCH 3/8] arm64: dts: qcom: sdm845-mtp: assign DSI clock source parents

2021-06-21 Thread abhinavk

On 2021-05-15 06:12, Dmitry Baryshkov wrote:

Assign DSI clock source parents to DSI PHY clocks.

Signed-off-by: Dmitry Baryshkov 

Reviewed-by: Abhinav Kumar 

---
 arch/arm64/boot/dts/qcom/sdm845-mtp.dts | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/arch/arm64/boot/dts/qcom/sdm845-mtp.dts
b/arch/arm64/boot/dts/qcom/sdm845-mtp.dts
index 1372fe8601f5..9e550e3ad678 100644
--- a/arch/arm64/boot/dts/qcom/sdm845-mtp.dts
+++ b/arch/arm64/boot/dts/qcom/sdm845-mtp.dts
@@ -413,6 +413,9 @@  {

qcom,dual-dsi-mode;

+   /* DSI1 is slave, so use DSI0 clocks */
+   assigned-clock-parents = <&dsi0_phy 0>, <&dsi0_phy 1>;
+
ports {
port@1 {
endpoint {


Re: [Freedreno] [PATCH 2/8] arm64: dts: qcom: sdm845: assign DSI clock source parents

2021-06-21 Thread abhinavk

On 2021-05-15 06:12, Dmitry Baryshkov wrote:

Assign DSI clock source parents to DSI PHY clocks.

Signed-off-by: Dmitry Baryshkov 

Reviewed-by: Abhinav Kumar 

---
 arch/arm64/boot/dts/qcom/sdm845.dtsi | 6 ++
 1 file changed, 6 insertions(+)

diff --git a/arch/arm64/boot/dts/qcom/sdm845.dtsi
b/arch/arm64/boot/dts/qcom/sdm845.dtsi
index 454f794af547..2166549382c1 100644
--- a/arch/arm64/boot/dts/qcom/sdm845.dtsi
+++ b/arch/arm64/boot/dts/qcom/sdm845.dtsi
@@ -4113,6 +4113,9 @@ dsi0: dsi@ae94000 {
  "core",
  "iface",
  "bus";
+   assigned-clocks = <&dispcc DISP_CC_MDSS_BYTE0_CLK_SRC>, <&dispcc DISP_CC_MDSS_PCLK0_CLK_SRC>;
+   assigned-clock-parents = <&dsi0_phy 0>, <&dsi0_phy 1>;
+
operating-points-v2 = <_opp_table>;
power-domains = < SDM845_CX>;

@@ -4179,6 +4182,9 @@ dsi1: dsi@ae96000 {
  "core",
  "iface",
  "bus";
+   assigned-clocks = <&dispcc DISP_CC_MDSS_BYTE1_CLK_SRC>, <&dispcc DISP_CC_MDSS_PCLK1_CLK_SRC>;
+   assigned-clock-parents = <&dsi1_phy 0>, <&dsi1_phy 1>;
+
operating-points-v2 = <_opp_table>;
power-domains = < SDM845_CX>;


Re: [Freedreno] [PATCH 1/8] arm64: dts: qcom: sc7180: assign DSI clock source parents

2021-06-21 Thread abhinavk

On 2021-05-15 06:12, Dmitry Baryshkov wrote:

Assign DSI clock source parents to DSI PHY clocks.

Signed-off-by: Dmitry Baryshkov 

Reviewed-by: Abhinav Kumar 

---
 arch/arm64/boot/dts/qcom/sc7180.dtsi | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/arch/arm64/boot/dts/qcom/sc7180.dtsi
b/arch/arm64/boot/dts/qcom/sc7180.dtsi
index 1ea3344ab62c..4e8708cce1cc 100644
--- a/arch/arm64/boot/dts/qcom/sc7180.dtsi
+++ b/arch/arm64/boot/dts/qcom/sc7180.dtsi
@@ -3090,6 +3090,9 @@ dsi0: dsi@ae94000 {
  "iface",
  "bus";

+   assigned-clocks = <&dispcc DISP_CC_MDSS_BYTE0_CLK_SRC>, <&dispcc DISP_CC_MDSS_PCLK0_CLK_SRC>;
+   assigned-clock-parents = <&dsi_phy 0>, <&dsi_phy 1>;
+
operating-points-v2 = <_opp_table>;
power-domains = < SC7180_CX>;


Re: [Freedreno] [PATCH 0/8] dsi: rework clock parents and timing handling

2021-06-21 Thread abhinavk

On 2021-05-15 06:12, Dmitry Baryshkov wrote:
This patch series brings back several patches targeting assigning dispcc
clock parents that were removed from the massive dsi rework patchset
earlier.

Few notes:
 - assign-clock-parents is a mandatory property according to the current
   dsi.txt description.


Is this comment still right? dsi.txt has now moved to YAML format, but 
even before
that I am not able to see that this was a mandatory property. With these 
changes yes,
it becomes a mandatory property and hence needs to be documented that 
way.


 - There is little point in duplicating this functionality with the ad-hoc
   implementation in the dsi code.

On top of that come few minor cleanups for the DSI PHY drivers.

I'd kindly ask that all dts changes also be brought in through the drm
tree, so that there won't be any breakage of functionality.


The following changes since commit 
f2f46b878777e0d3f885c7ddad48f477b4dea247:


  drm/msm/dp: initialize audio_comp when audio starts (2021-05-06
16:26:57 -0700)

are available in the Git repository at:

  https://git.linaro.org/people/dmitry.baryshkov/kernel.git 
dsi-phy-update


for you to fetch changes up to 
f1fd3b113cbb98febad682fc11ea1c6e717434c2:


  drm/msm/dsi: remove msm_dsi_dphy_timing from msm_dsi_phy (2021-05-14
22:55:11 +0300)


Dmitry Baryshkov (8):
  arm64: dts: qcom: sc7180: assign DSI clock source parents
  arm64: dts: qcom: sdm845: assign DSI clock source parents
  arm64: dts: qcom: sdm845-mtp: assign DSI clock source parents
  arm64: dts: qcom: sm8250: assign DSI clock source parents
  drm/msm/dsi: stop setting clock parents manually
  drm/msm/dsi: phy: use of_device_get_match_data
  drm/msm/dsi: drop msm_dsi_phy_get_shared_timings
  drm/msm/dsi: remove msm_dsi_dphy_timing from msm_dsi_phy

 arch/arm64/boot/dts/qcom/sc7180.dtsi|  3 ++
 arch/arm64/boot/dts/qcom/sdm845-mtp.dts |  3 ++
 arch/arm64/boot/dts/qcom/sdm845.dtsi|  6 +++
 arch/arm64/boot/dts/qcom/sm8250.dtsi|  6 +++
 drivers/gpu/drm/msm/dsi/dsi.h   |  7 +---
 drivers/gpu/drm/msm/dsi/dsi_host.c  | 51 
-

 drivers/gpu/drm/msm/dsi/dsi_manager.c   |  8 +---
 drivers/gpu/drm/msm/dsi/phy/dsi_phy.c   | 46 
++

 drivers/gpu/drm/msm/dsi/phy/dsi_phy.h   | 10 -
 drivers/gpu/drm/msm/dsi/phy/dsi_phy_10nm.c  | 11 ++
 drivers/gpu/drm/msm/dsi/phy/dsi_phy_14nm.c  | 11 ++
 drivers/gpu/drm/msm/dsi/phy/dsi_phy_20nm.c  | 10 +
 drivers/gpu/drm/msm/dsi/phy/dsi_phy_28nm.c  | 12 ++
 drivers/gpu/drm/msm/dsi/phy/dsi_phy_28nm_8960.c | 10 +
 drivers/gpu/drm/msm/dsi/phy/dsi_phy_7nm.c   | 13 ++-
 15 files changed, 67 insertions(+), 140 deletions(-)




Re: [bugzilla-dae...@bugzilla.kernel.org: [Bug 213519] New: WARNING on system reboot in: drivers/gpu/drm/i915/intel_runtime_pm.c:635 intel_runtime_pm_driver_release]

2021-06-21 Thread Bjorn Helgaas
[+cc Joel (reporter)]

On Mon, Jun 21, 2021 at 04:50:14PM -0500, Bjorn Helgaas wrote:
> - Forwarded message from bugzilla-dae...@bugzilla.kernel.org -
> 
> Date: Mon, 21 Jun 2021 02:50:09 +
> From: bugzilla-dae...@bugzilla.kernel.org
> To: bj...@helgaas.com
> Subject: [Bug 213519] New: WARNING on system reboot in:
>   drivers/gpu/drm/i915/intel_runtime_pm.c:635 
> intel_runtime_pm_driver_release
> Message-ID: 
> 
> https://bugzilla.kernel.org/show_bug.cgi?id=213519
> 
> Bug ID: 213519
>Summary: WARNING on system reboot in:
> drivers/gpu/drm/i915/intel_runtime_pm.c:635
> intel_runtime_pm_driver_release
>Product: Drivers
>Version: 2.5
> Kernel Version: 5.12.12
>   Hardware: x86-64
> OS: Linux
>   Tree: Mainline
> Status: NEW
>   Severity: normal
>   Priority: P1
>  Component: PCI
>   Assignee: drivers_...@kernel-bugs.osdl.org
>   Reporter: j-c...@westvi.com
> Regression: No
> 
> Created attachment 297517
> >   --> https://bugzilla.kernel.org/attachment.cgi?id=297517&action=edit
> Contents of 'warning' stack trace, etc.
> 
> As mentioned in summary - warning message in this routine at system reboot. 
> Try
> as I might, I cannot include the text of the warning directly here in the
> description without losing carriage returns, so I include it as a text
> attachment.
> 
> - End forwarded message -
> 
> [Attachment contents below]
> 
> [  239.019148] [ cut here ]
> [  239.024226] i915 :00:02.0: i915 raw-wakerefs=1 wakelocks=1 on cleanup
> [  239.031561] WARNING: CPU: 4 PID: 2484 at 
> drivers/gpu/drm/i915/intel_runtime_pm.c:635 
> intel_runtime_pm_driver_release+0x4f/0x60
> [  239.043974] Modules linked in: mei_wdt x86_pkg_temp_thermal 
> ghash_clmulni_intel mei_me mei cryptd
> [  239.053656] CPU: 4 PID: 2484 Comm: reboot Not tainted 5.12.12 #1
> [  239.060236] Hardware name: To Be Filled By O.E.M. To Be Filled By 
> O.E.M./NUC-8665UE, BIOS P1.50 06/04/2021
> [  239.070766] RIP: 0010:intel_runtime_pm_driver_release+0x4f/0x60
> [  239.077256] Code: 10 4c 8b 6f 50 4d 85 ed 75 03 4c 8b 2f e8 59 8f 11 00 41 
> 89 d8 44 89 e1 4c 89 ea 48 89 c6 48 c7 c7 f8 25 7d b0 e8 06 e8 67 00 <0f> 0b 
> 5b 41 5c 41 5d 5d c3 0f 1f 84 00 00 00 00 00 55 48 89 e5 48
> [  239.097700] RSP: 0018:b8c682f3bd30 EFLAGS: 00010286
> [  239.103422] RAX:  RBX: 0001 RCX: 
> b0af01e8
> [  239.85] RDX:  RSI: dfff RDI: 
> b0a401e0
> [  239.118850] RBP: b8c682f3bd48 R08:  R09: 
> b8c682f3bb08
> [  239.126617] R10: b8c682f3bb00 R11: b0b20228 R12: 
> 0001
> [  239.134390] R13: 978680d114b0 R14: 97868197eae8 R15: 
> fee1dead
> [  239.142203] FS:  7f741a182580() GS:9789dc50() 
> knlGS:
> [  239.151044] CS:  0010 DS:  ES:  CR0: 80050033
> [  239.157318] CR2: 0169f4c8 CR3: 00019cf14003 CR4: 
> 003706e0
> [  239.165098] DR0:  DR1:  DR2: 
> 
> [  239.172874] DR3:  DR6: fffe0ff0 DR7: 
> 0400
> [  239.180658] Call Trace:
> [  239.183346]  i915_driver_shutdown+0xcf/0xe0
> [  239.187920]  i915_pci_shutdown+0x10/0x20
> [  239.192181]  pci_device_shutdown+0x35/0x60
> [  239.196629]  device_shutdown+0x156/0x1b0
> [  239.200827]  __do_sys_reboot.cold+0x2f/0x5b
> [  239.205410]  __x64_sys_reboot+0x16/0x20
> [  239.209586]  do_syscall_64+0x38/0x50
> [  239.213399]  entry_SYSCALL_64_after_hwframe+0x44/0xae
> [  239.218837] RIP: 0033:0x7f741a0a9bc3
> [  239.222740] Code: 64 89 01 48 83 c8 ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 
> 1f 44 00 00 89 fa be 69 19 12 28 bf ad de e1 fe b8 a9 00 00 00 0f 05 <48> 3d 
> 00 f0 ff ff 77 05 c3 0f 1f 40 00 48 8b 15 71 c2 0c 00 f7 d8
> [  239.243228] RSP: 002b:7ffcc2a16488 EFLAGS: 0206 ORIG_RAX: 
> 00a9
> [  239.251503] RAX: ffda RBX: 7ffcc2a165d8 RCX: 
> 7f741a0a9bc3
> [  239.259304] RDX: 01234567 RSI: 28121969 RDI: 
> fee1dead
> [  239.267105] RBP: 0004 R08:  R09: 
> 0169e2e0
> [  239.274926] R10: fd06 R11: 0206 R12: 
> 
> [  239.282719] R13: 0001 R14:  R15: 
> 0001
> [  239.290433] ---[ end trace cd9d07db38ec6618 ]---
> 


[bugzilla-dae...@bugzilla.kernel.org: [Bug 213519] New: WARNING on system reboot in: drivers/gpu/drm/i915/intel_runtime_pm.c:635 intel_runtime_pm_driver_release]

2021-06-21 Thread Bjorn Helgaas
- Forwarded message from bugzilla-dae...@bugzilla.kernel.org -

Date: Mon, 21 Jun 2021 02:50:09 +
From: bugzilla-dae...@bugzilla.kernel.org
To: bj...@helgaas.com
Subject: [Bug 213519] New: WARNING on system reboot in:
drivers/gpu/drm/i915/intel_runtime_pm.c:635 
intel_runtime_pm_driver_release
Message-ID: 

https://bugzilla.kernel.org/show_bug.cgi?id=213519

Bug ID: 213519
   Summary: WARNING on system reboot in:
drivers/gpu/drm/i915/intel_runtime_pm.c:635
intel_runtime_pm_driver_release
   Product: Drivers
   Version: 2.5
Kernel Version: 5.12.12
  Hardware: x86-64
OS: Linux
  Tree: Mainline
Status: NEW
  Severity: normal
  Priority: P1
 Component: PCI
  Assignee: drivers_...@kernel-bugs.osdl.org
  Reporter: j-c...@westvi.com
Regression: No

Created attachment 297517
>   --> https://bugzilla.kernel.org/attachment.cgi?id=297517&action=edit
Contents of 'warning' stack trace, etc.

As mentioned in summary - warning message in this routine at system reboot. Try
as I might, I cannot include the text of the warning directly here in the
description without losing carriage returns, so I include it as a text
attachment.

- End forwarded message -

[Attachment contents below]

[  239.019148] [ cut here ]
[  239.024226] i915 :00:02.0: i915 raw-wakerefs=1 wakelocks=1 on cleanup
[  239.031561] WARNING: CPU: 4 PID: 2484 at 
drivers/gpu/drm/i915/intel_runtime_pm.c:635 
intel_runtime_pm_driver_release+0x4f/0x60
[  239.043974] Modules linked in: mei_wdt x86_pkg_temp_thermal 
ghash_clmulni_intel mei_me mei cryptd
[  239.053656] CPU: 4 PID: 2484 Comm: reboot Not tainted 5.12.12 #1
[  239.060236] Hardware name: To Be Filled By O.E.M. To Be Filled By 
O.E.M./NUC-8665UE, BIOS P1.50 06/04/2021
[  239.070766] RIP: 0010:intel_runtime_pm_driver_release+0x4f/0x60
[  239.077256] Code: 10 4c 8b 6f 50 4d 85 ed 75 03 4c 8b 2f e8 59 8f 11 00 41 
89 d8 44 89 e1 4c 89 ea 48 89 c6 48 c7 c7 f8 25 7d b0 e8 06 e8 67 00 <0f> 0b 5b 
41 5c 41 5d 5d c3 0f 1f 84 00 00 00 00 00 55 48 89 e5 48
[  239.097700] RSP: 0018:b8c682f3bd30 EFLAGS: 00010286
[  239.103422] RAX:  RBX: 0001 RCX: b0af01e8
[  239.85] RDX:  RSI: dfff RDI: b0a401e0
[  239.118850] RBP: b8c682f3bd48 R08:  R09: b8c682f3bb08
[  239.126617] R10: b8c682f3bb00 R11: b0b20228 R12: 0001
[  239.134390] R13: 978680d114b0 R14: 97868197eae8 R15: fee1dead
[  239.142203] FS:  7f741a182580() GS:9789dc50() 
knlGS:
[  239.151044] CS:  0010 DS:  ES:  CR0: 80050033
[  239.157318] CR2: 0169f4c8 CR3: 00019cf14003 CR4: 003706e0
[  239.165098] DR0:  DR1:  DR2: 
[  239.172874] DR3:  DR6: fffe0ff0 DR7: 0400
[  239.180658] Call Trace:
[  239.183346]  i915_driver_shutdown+0xcf/0xe0
[  239.187920]  i915_pci_shutdown+0x10/0x20
[  239.192181]  pci_device_shutdown+0x35/0x60
[  239.196629]  device_shutdown+0x156/0x1b0
[  239.200827]  __do_sys_reboot.cold+0x2f/0x5b
[  239.205410]  __x64_sys_reboot+0x16/0x20
[  239.209586]  do_syscall_64+0x38/0x50
[  239.213399]  entry_SYSCALL_64_after_hwframe+0x44/0xae
[  239.218837] RIP: 0033:0x7f741a0a9bc3
[  239.222740] Code: 64 89 01 48 83 c8 ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 
1f 44 00 00 89 fa be 69 19 12 28 bf ad de e1 fe b8 a9 00 00 00 0f 05 <48> 3d 00 
f0 ff ff 77 05 c3 0f 1f 40 00 48 8b 15 71 c2 0c 00 f7 d8
[  239.243228] RSP: 002b:7ffcc2a16488 EFLAGS: 0206 ORIG_RAX: 
00a9
[  239.251503] RAX: ffda RBX: 7ffcc2a165d8 RCX: 7f741a0a9bc3
[  239.259304] RDX: 01234567 RSI: 28121969 RDI: fee1dead
[  239.267105] RBP: 0004 R08:  R09: 0169e2e0
[  239.274926] R10: fd06 R11: 0206 R12: 
[  239.282719] R13: 0001 R14:  R15: 0001
[  239.290433] ---[ end trace cd9d07db38ec6618 ]---



[pull] amdgpu drm-fixes-5.13

2021-06-21 Thread Alex Deucher
Hi Dave, Daniel,

Last minute fixes for 5.13.

The following changes since commit 13311e74253fe64329390df80bed3f07314ddd61:

  Linux 5.13-rc7 (2021-06-20 15:03:15 -0700)

are available in the Git repository at:

  https://gitlab.freedesktop.org/agd5f/linux.git 
tags/amd-drm-fixes-5.13-2021-06-21

for you to fetch changes up to ee5468b9f1d3bf48082eed351dace14598e8ca39:

  Revert "drm/amdgpu/gfx9: fix the doorbell missing when in CGPG issue." 
(2021-06-21 17:22:52 -0400)


amd-drm-fixes-5.13-2021-06-21:

amdgpu:
- Revert GFX9, 10 doorbell fixes, we just
  end up trading one bug for another
- Potential memory corruption fix in framebuffer handling


Michel Dänzer (1):
  drm/amdgpu: Call drm_framebuffer_init last for framebuffer init

Yifan Zhang (2):
  Revert "drm/amdgpu/gfx10: enlarge CP_MEC_DOORBELL_RANGE_UPPER to cover 
full doorbell."
  Revert "drm/amdgpu/gfx9: fix the doorbell missing when in CGPG issue."

 drivers/gpu/drm/amd/amdgpu/amdgpu_display.c | 12 +++-
 drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c  |  6 +-
 drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c   |  6 +-
 3 files changed, 9 insertions(+), 15 deletions(-)


Re: [PATCH 3/3] Replace for_each_*_bit_from() with for_each_*_bit() where appropriate

2021-06-21 Thread Yury Norov
On Mon, Jun 21, 2021 at 01:17:11PM -0700, Guenter Roeck wrote:
> On Fri, Jun 18, 2021 at 12:57:35PM -0700, Yury Norov wrote:
> > A couple of kernel functions call for_each_*_bit_from() with start
> > bit equal to 0. Replace them with for_each_*_bit().
> > 
> > No functional changes, but might improve on readability.
> > 
> > Signed-off-by: Yury Norov 
> > ---
> >  arch/x86/kernel/apic/vector.c | 4 ++--
> >  drivers/gpu/drm/etnaviv/etnaviv_gpu.c | 4 ++--
> >  drivers/hwmon/ltc2992.c   | 3 +--
> 
> This should be three different patches, one per subsystem.

It was discussed recently.
https://lore.kernel.org/linux-arch/20210614180706.1e8564854bfed648dd4c0...@linux-foundation.org/
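
On the substance of the change itself, a tiny sketch (not from the patch; the
bitmap value and names are made up) of why the two forms are interchangeable
once the iterator starts at 0:

/* needs <linux/bitops.h> and <linux/printk.h> */
static void bit_iteration_sketch(void)
{
	unsigned long map = 0x2d;	/* bits 0, 2, 3 and 5 set */
	unsigned int bit = 0;

	/* old style: iterator must be pre-initialised to 0 */
	for_each_set_bit_from(bit, &map, BITS_PER_LONG)
		pr_info("from: bit %u\n", bit);

	/* new style: no pre-initialisation needed, visits the same bits */
	for_each_set_bit(bit, &map, BITS_PER_LONG)
		pr_info("plain: bit %u\n", bit);
}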


Re: [PATCH 2/2] drm: rcar-du: Shutdown the display on remove

2021-06-21 Thread Kieran Bingham
Hi Laurent,

On 23/03/2021 00:56, Laurent Pinchart wrote:
> When the device is unbound from the driver (the DU being a platform
> device, this occurs either when removing the DU module, or when
> unbinding the device manually through sysfs), the display may be active.
> Make sure it gets shut down.

I bet this may be particularly true if there's a console on it.


> Signed-off-by: Laurent Pinchart 
> ---
>  drivers/gpu/drm/rcar-du/rcar_du_drv.c | 1 +
>  1 file changed, 1 insertion(+)
> 
> diff --git a/drivers/gpu/drm/rcar-du/rcar_du_drv.c 
> b/drivers/gpu/drm/rcar-du/rcar_du_drv.c
> index 2a06ec1cbefb..9f1a3aad4dd7 100644
> --- a/drivers/gpu/drm/rcar-du/rcar_du_drv.c
> +++ b/drivers/gpu/drm/rcar-du/rcar_du_drv.c
> @@ -553,6 +553,7 @@ static int rcar_du_remove(struct platform_device *pdev)
>   struct drm_device *ddev = &rcdu->ddev;
>  
>   drm_dev_unregister(ddev);
> + drm_atomic_helper_shutdown(ddev);
>  
>   drm_kms_helper_poll_fini(ddev);

There's a real mix of other drivers either calling
drm_kms_helper_poll_fini() before drm_atomic_helper_shutdown() or after,
so I'll assume that the sequencing here isn't terribly important (I hope).

So,

Reviewed-by: Kieran Bingham 


Re: [PATCH 1/2] drm: rcar-du: Don't put reference to drm_device in rcar_du_remove()

2021-06-21 Thread Kieran Bingham
Hi Laurent,

On 23/03/2021 00:56, Laurent Pinchart wrote:
> The reference to the drm_device that was acquired by
> devm_drm_dev_alloc() is released automatically by the devres
> infrastructure. It must not be released manually, as that causes a
> reference underflow.
> 

Ouch. We need some tests on module load and unload somewhere...

I'm getting closer with infrastructure ...


> Fixes: ea6aae151887 ("drm: rcar-du: Embed drm_device in rcar_du_device")
> Signed-off-by: Laurent Pinchart 

Reviewed-by: Kieran Bingham 


> ---
>  drivers/gpu/drm/rcar-du/rcar_du_drv.c | 2 --
>  1 file changed, 2 deletions(-)
> 
> diff --git a/drivers/gpu/drm/rcar-du/rcar_du_drv.c 
> b/drivers/gpu/drm/rcar-du/rcar_du_drv.c
> index 43de3d8686e8..2a06ec1cbefb 100644
> --- a/drivers/gpu/drm/rcar-du/rcar_du_drv.c
> +++ b/drivers/gpu/drm/rcar-du/rcar_du_drv.c
> @@ -556,8 +556,6 @@ static int rcar_du_remove(struct platform_device *pdev)
>  
>   drm_kms_helper_poll_fini(ddev);
>  
> - drm_dev_put(ddev);
> -
>   return 0;
>  }
>  
> 
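
The rationale in a nutshell: devm_drm_dev_alloc() hands back a reference that
devres owns and drops on unbind, so remove() must not put it again. A minimal
sketch of the intended pattern (made-up "toy" names, not the rcar-du code):

/* needs <drm/drm_drv.h> and <drm/drm_atomic_helper.h> */
struct toy_device {
	struct drm_device ddev;
	/* ... */
};

static const struct drm_driver toy_drm_driver = {
	/* .name, .fops, ... */
};

static int toy_probe(struct platform_device *pdev)
{
	struct toy_device *toy;

	/* The reference acquired here is owned by devres. */
	toy = devm_drm_dev_alloc(&pdev->dev, &toy_drm_driver,
				 struct toy_device, ddev);
	if (IS_ERR(toy))
		return PTR_ERR(toy);

	platform_set_drvdata(pdev, toy);

	return drm_dev_register(&toy->ddev, 0);
}

static int toy_remove(struct platform_device *pdev)
{
	struct toy_device *toy = platform_get_drvdata(pdev);

	drm_dev_unregister(&toy->ddev);
	drm_atomic_helper_shutdown(&toy->ddev);

	/* No drm_dev_put() here: devres drops the reference automatically. */
	return 0;
}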


Re: [PATCH 3/3] Replace for_each_*_bit_from() with for_each_*_bit() where appropriate

2021-06-21 Thread Guenter Roeck
On Fri, Jun 18, 2021 at 12:57:35PM -0700, Yury Norov wrote:
> A couple of kernel functions call for_each_*_bit_from() with start
> bit equal to 0. Replace them with for_each_*_bit().
> 
> No functional changes, but might improve on readability.
> 
> Signed-off-by: Yury Norov 
> ---
>  arch/x86/kernel/apic/vector.c | 4 ++--
>  drivers/gpu/drm/etnaviv/etnaviv_gpu.c | 4 ++--
>  drivers/hwmon/ltc2992.c   | 3 +--

This should be three different patches, one per subsystem.

Guenter

>  3 files changed, 5 insertions(+), 6 deletions(-)
> 
> diff --git a/arch/x86/kernel/apic/vector.c b/arch/x86/kernel/apic/vector.c
> index fb67ed5e7e6a..d099ef226f55 100644
> --- a/arch/x86/kernel/apic/vector.c
> +++ b/arch/x86/kernel/apic/vector.c
> @@ -760,9 +760,9 @@ void __init lapic_update_legacy_vectors(void)
>  
>  void __init lapic_assign_system_vectors(void)
>  {
> - unsigned int i, vector = 0;
> + unsigned int i, vector;
>  
> - for_each_set_bit_from(vector, system_vectors, NR_VECTORS)
> + for_each_set_bit(vector, system_vectors, NR_VECTORS)
>   irq_matrix_assign_system(vector_matrix, vector, false);
>  
>   if (nr_legacy_irqs() > 1)
> diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gpu.c 
> b/drivers/gpu/drm/etnaviv/etnaviv_gpu.c
> index 4102bcea3341..42ce3287d3be 100644
> --- a/drivers/gpu/drm/etnaviv/etnaviv_gpu.c
> +++ b/drivers/gpu/drm/etnaviv/etnaviv_gpu.c
> @@ -1032,7 +1032,7 @@ int etnaviv_gpu_debugfs(struct etnaviv_gpu *gpu, struct 
> seq_file *m)
>  
>  void etnaviv_gpu_recover_hang(struct etnaviv_gpu *gpu)
>  {
> - unsigned int i = 0;
> + unsigned int i;
>  
>   dev_err(gpu->dev, "recover hung GPU!\n");
>  
> @@ -1045,7 +1045,7 @@ void etnaviv_gpu_recover_hang(struct etnaviv_gpu *gpu)
>  
>   /* complete all events, the GPU won't do it after the reset */
>   spin_lock(&gpu->event_spinlock);
> - for_each_set_bit_from(i, gpu->event_bitmap, ETNA_NR_EVENTS)
> + for_each_set_bit(i, gpu->event_bitmap, ETNA_NR_EVENTS)
>   complete(&gpu->event_free);
>   bitmap_zero(gpu->event_bitmap, ETNA_NR_EVENTS);
>   spin_unlock(&gpu->event_spinlock);
> diff --git a/drivers/hwmon/ltc2992.c b/drivers/hwmon/ltc2992.c
> index 2a4bed0ab226..7352d2b3c756 100644
> --- a/drivers/hwmon/ltc2992.c
> +++ b/drivers/hwmon/ltc2992.c
> @@ -248,8 +248,7 @@ static int ltc2992_gpio_get_multiple(struct gpio_chip 
> *chip, unsigned long *mask
>  
>   gpio_status = reg;
>  
> - gpio_nr = 0;
> - for_each_set_bit_from(gpio_nr, mask, LTC2992_GPIO_NR) {
> + for_each_set_bit(gpio_nr, mask, LTC2992_GPIO_NR) {
>   if (test_bit(LTC2992_GPIO_BIT(gpio_nr), &gpio_status))
>   set_bit(gpio_nr, bits);
>   }


Re: [PATCH] drm: rcar-du: Shutdown the display on system shutdown

2021-06-21 Thread Kieran Bingham
Hi Laurent,

On 23/03/2021 00:12, Laurent Pinchart wrote:
> When the system shuts down or warm reboots, the display may be active,
> with the hardware accessing system memory. Upon reboot, the DDR will not
> be accessible, which may cause issues.

Troublesome indeed.

> Implement the platform_driver .shutdown() operation and shut down the
> display to fix this.
> 
> Signed-off-by: Laurent Pinchart 

Looking in drm_atomic_helper.c, I saw reference to
drm_atomic_helper_shutdown() also being used at driver unload ... so I
was going to ask about that - until I saw "Shutdown the display on
remove" which is in the next 2 patches of my review queue ;-)

Reviewed-by: Kieran Bingham 



> ---
>  drivers/gpu/drm/rcar-du/rcar_du_drv.c | 8 
>  1 file changed, 8 insertions(+)
> 
> diff --git a/drivers/gpu/drm/rcar-du/rcar_du_drv.c 
> b/drivers/gpu/drm/rcar-du/rcar_du_drv.c
> index bfbff90588cb..43de3d8686e8 100644
> --- a/drivers/gpu/drm/rcar-du/rcar_du_drv.c
> +++ b/drivers/gpu/drm/rcar-du/rcar_du_drv.c
> @@ -561,6 +561,13 @@ static int rcar_du_remove(struct platform_device *pdev)
>   return 0;
>  }
>  
> +static void rcar_du_shutdown(struct platform_device *pdev)
> +{
> + struct rcar_du_device *rcdu = platform_get_drvdata(pdev);
> +
> + drm_atomic_helper_shutdown(&rcdu->ddev);
> +}
> +
>  static int rcar_du_probe(struct platform_device *pdev)
>  {
>   struct rcar_du_device *rcdu;
> @@ -617,6 +624,7 @@ static int rcar_du_probe(struct platform_device *pdev)
>  static struct platform_driver rcar_du_platform_driver = {
>   .probe  = rcar_du_probe,
>   .remove = rcar_du_remove,
> + .shutdown   = rcar_du_shutdown,
>   .driver = {
>   .name   = "rcar-du",
>   .pm = &rcar_du_pm_ops,
> 


[PATCH v6 3/3] drm/i915/ttm: Use TTM for system memory

2021-06-21 Thread Thomas Hellström
For discrete, use TTM for both cached and WC system memory. That means
we currently rely on the TTM memory accounting / shrinker. For cached
system memory we should consider remaining shmem-backed, which can be
implemented from our ttm_tt_populate callback. We can then also reuse our
own very elaborate shrinker for that memory.

Signed-off-by: Thomas Hellström 
Reviewed-by: Matthew Auld 
---
v2:
- Fix IS_ERR_OR_NULL() check to IS_ERR() (Reported by Matthew Auld)
v3:
- Commit message typo fix
v6:
- Fix TODO:s for supporting system memory with TTM.
- Update the object GEM region after a TTM move if compatible.
- Add a couple of warnings for shmem on DGFX.
---
 drivers/gpu/drm/i915/gem/i915_gem_shmem.c  |  3 ++
 drivers/gpu/drm/i915/gem/i915_gem_ttm.c| 51 +-
 drivers/gpu/drm/i915/i915_drv.h|  3 --
 drivers/gpu/drm/i915/intel_memory_region.c |  7 ++-
 drivers/gpu/drm/i915/intel_memory_region.h |  8 
 5 files changed, 58 insertions(+), 14 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c 
b/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
index 7aa1c95c7b7d..3648ae1d6628 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
@@ -284,6 +284,7 @@ __i915_gem_object_release_shmem(struct drm_i915_gem_object 
*obj,
bool needs_clflush)
 {
GEM_BUG_ON(obj->mm.madv == __I915_MADV_PURGED);
+   GEM_WARN_ON(IS_DGFX(to_i915(obj->base.dev)));
 
if (obj->mm.madv == I915_MADV_DONTNEED)
obj->mm.dirty = false;
@@ -302,6 +303,7 @@ void i915_gem_object_put_pages_shmem(struct 
drm_i915_gem_object *obj, struct sg_
struct pagevec pvec;
struct page *page;
 
+   GEM_WARN_ON(IS_DGFX(to_i915(obj->base.dev)));
__i915_gem_object_release_shmem(obj, pages, true);
 
i915_gem_gtt_finish_pages(obj, pages);
@@ -560,6 +562,7 @@ i915_gem_object_create_shmem_from_data(struct 
drm_i915_private *dev_priv,
resource_size_t offset;
int err;
 
+   GEM_WARN_ON(IS_DGFX(dev_priv));
obj = i915_gem_object_create_shmem(dev_priv, round_up(size, PAGE_SIZE));
if (IS_ERR(obj))
return obj;
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c 
b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
index 966b292d07da..07097f150065 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
@@ -286,6 +286,25 @@ static void i915_ttm_adjust_gem_after_move(struct 
drm_i915_gem_object *obj)
 {
struct ttm_buffer_object *bo = i915_gem_to_ttm(obj);
unsigned int cache_level;
+   unsigned int i;
+
+   /*
+* If object was moved to an allowable region, update the object
+* region to consider it migrated. Note that if it's currently not
+* in an allowable region, it's evicted and we don't update the
+* object region.
+*/
+   if (intel_region_to_ttm_type(obj->mm.region) != bo->resource->mem_type) 
{
+   for (i = 0; i < obj->mm.n_placements; ++i) {
+   struct intel_memory_region *mr = obj->mm.placements[i];
+
+   if (intel_region_to_ttm_type(mr) == 
bo->resource->mem_type &&
+   mr != obj->mm.region) {
+   intel_memory_region_put(obj->mm.region);
+   obj->mm.region = intel_memory_region_get(mr);
+   }
+   }
+   }
 
obj->mem_flags &= ~(I915_BO_FLAG_STRUCT_PAGE | I915_BO_FLAG_IOMEM);
 
@@ -615,13 +634,6 @@ static int i915_ttm_get_pages(struct drm_i915_gem_object 
*obj)
/* Move to the requested placement. */
i915_ttm_placement_from_obj(obj, , busy, );
 
-   /*
-* For now we support LMEM only with TTM.
-* TODO: Remove with system support
-*/
-   GEM_BUG_ON(requested.mem_type < I915_PL_LMEM0 ||
-  busy[0].mem_type < I915_PL_LMEM0);
-
/* First try only the requested placement. No eviction. */
real_num_busy = fetch_and_zero(_busy_placement);
ret = ttm_bo_validate(bo, , );
@@ -635,9 +647,6 @@ static int i915_ttm_get_pages(struct drm_i915_gem_object 
*obj)
ret == -EAGAIN)
return ret;
 
-   /* TODO: Remove this when we support system as TTM. */
-   real_num_busy = 1;
-
/*
 * If the initial attempt fails, allow all accepted placements,
 * evicting if necessary.
@@ -872,3 +881,25 @@ int __i915_gem_ttm_object_init(struct intel_memory_region 
*mem,
 
return 0;
 }
+
+static const struct intel_memory_region_ops ttm_system_region_ops = {
+   .init_object = __i915_gem_ttm_object_init,
+};
+
+struct intel_memory_region *
+i915_gem_ttm_system_setup(struct drm_i915_private *i915,
+ u16 type, u16 instance)
+{
+   struct intel_memory_region *mr;
+
+ 

[PATCH v6 2/3] drm/i915/ttm: Adjust gem flags and caching settings after a move

2021-06-21 Thread Thomas Hellström
After a TTM move or object init we need to update the i915 gem flags and
caching settings to reflect the new placement. Currently caching settings
are not changed during the lifetime of an object, although that might
change moving forward if we run into performance issues or issues with
WC system page allocations.
Also introduce gpu_binds_iomem() and cpu_maps_iomem() to clean up the
various ways we previously used to detect this.
Finally, initialize the TTM object reserved to be able to update
flags and caching before anyone else gets hold of the object.

Signed-off-by: Thomas Hellström 
Reviewed-by: Matthew Auld 

v6:
- Rebase on accelerated ttm moves.
---
 drivers/gpu/drm/i915/gem/i915_gem_ttm.c | 143 ++--
 1 file changed, 107 insertions(+), 36 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c 
b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
index b5dd3b7037f4..966b292d07da 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
@@ -91,6 +91,26 @@ static int i915_ttm_err_to_gem(int err)
return err;
 }
 
+static bool gpu_binds_iomem(struct ttm_resource *mem)
+{
+   return mem->mem_type != TTM_PL_SYSTEM;
+}
+
+static bool cpu_maps_iomem(struct ttm_resource *mem)
+{
+   /* Once / if we support GGTT, this is also false for cached ttm_tts */
+   return mem->mem_type != TTM_PL_SYSTEM;
+}
+
+static enum i915_cache_level
+i915_ttm_cache_level(struct drm_i915_private *i915, struct ttm_resource *res,
+struct ttm_tt *ttm)
+{
+   return ((HAS_LLC(i915) || HAS_SNOOP(i915)) && !gpu_binds_iomem(res) &&
+   ttm->caching == ttm_cached) ? I915_CACHE_LLC :
+   I915_CACHE_NONE;
+}
+
 static void i915_ttm_adjust_lru(struct drm_i915_gem_object *obj);
 
 static enum ttm_caching
@@ -248,6 +268,35 @@ static void i915_ttm_free_cached_io_st(struct 
drm_i915_gem_object *obj)
obj->ttm.cached_io_st = NULL;
 }
 
+static void
+i915_ttm_adjust_domains_after_move(struct drm_i915_gem_object *obj)
+{
+   struct ttm_buffer_object *bo = i915_gem_to_ttm(obj);
+
+   if (cpu_maps_iomem(bo->resource) || bo->ttm->caching != ttm_cached) {
+   obj->write_domain = I915_GEM_DOMAIN_WC;
+   obj->read_domains = I915_GEM_DOMAIN_WC;
+   } else {
+   obj->write_domain = I915_GEM_DOMAIN_CPU;
+   obj->read_domains = I915_GEM_DOMAIN_CPU;
+   }
+}
+
+static void i915_ttm_adjust_gem_after_move(struct drm_i915_gem_object *obj)
+{
+   struct ttm_buffer_object *bo = i915_gem_to_ttm(obj);
+   unsigned int cache_level;
+
+   obj->mem_flags &= ~(I915_BO_FLAG_STRUCT_PAGE | I915_BO_FLAG_IOMEM);
+
+   obj->mem_flags |= cpu_maps_iomem(bo->resource) ? I915_BO_FLAG_IOMEM :
+   I915_BO_FLAG_STRUCT_PAGE;
+
+   cache_level = i915_ttm_cache_level(to_i915(bo->base.dev), bo->resource,
+  bo->ttm);
+   i915_gem_object_set_cache_coherency(obj, cache_level);
+}
+
 static void i915_ttm_purge(struct drm_i915_gem_object *obj)
 {
struct ttm_buffer_object *bo = i915_gem_to_ttm(obj);
@@ -263,8 +312,10 @@ static void i915_ttm_purge(struct drm_i915_gem_object *obj)
 
/* TTM's purge interface. Note that we might be reentering. */
ret = ttm_bo_validate(bo, , );
-
if (!ret) {
+   obj->write_domain = 0;
+   obj->read_domains = 0;
+   i915_ttm_adjust_gem_after_move(obj);
i915_ttm_free_cached_io_st(obj);
obj->mm.madv = __I915_MADV_PURGED;
}
@@ -347,12 +398,15 @@ i915_ttm_resource_get_st(struct drm_i915_gem_object *obj,
 struct ttm_resource *res)
 {
struct ttm_buffer_object *bo = i915_gem_to_ttm(obj);
-   struct ttm_resource_manager *man =
-   ttm_manager_type(bo->bdev, res->mem_type);
 
-   if (man->use_tt)
+   if (!gpu_binds_iomem(res))
return i915_ttm_tt_get_st(bo->ttm);
 
+   /*
+* If CPU mapping differs, we need to add the ttm_tt pages to
+* the resulting st. Might make sense for GGTT.
+*/
+   GEM_WARN_ON(!cpu_maps_iomem(res));
return intel_region_ttm_resource_to_st(obj->mm.region, res);
 }
 
@@ -367,23 +421,25 @@ static int i915_ttm_accel_move(struct ttm_buffer_object 
*bo,
struct drm_i915_gem_object *obj = i915_ttm_to_gem(bo);
struct sg_table *src_st;
struct i915_request *rq;
+   struct ttm_tt *ttm = bo->ttm;
+   enum i915_cache_level src_level, dst_level;
int ret;
 
if (!i915->gt.migrate.context)
return -EINVAL;
 
-   if (!bo->ttm || !ttm_tt_is_populated(bo->ttm)) {
+   dst_level = i915_ttm_cache_level(i915, dst_mem, ttm);
+   if (!ttm || !ttm_tt_is_populated(ttm)) {
if (bo->type == ttm_bo_type_kernel)
return -EINVAL;
 
-   if (bo->ttm &&
-   

[PATCH v6 1/3] drm/i915: Update object placement flags to be mutable

2021-06-21 Thread Thomas Hellström
The object ops flag I915_GEM_OBJECT_HAS_IOMEM and the object flag
I915_BO_ALLOC_STRUCT_PAGE are considered immutable by
much of our code. Introduce a new mem_flags member to hold these
and make sure checks for these flags being set are either done
under the object lock or with pages properly pinned. The flags
will change during migration under the object lock.

Signed-off-by: Thomas Hellström 
Reviewed-by: Matthew Auld 
---
v2:
- Unconditionally set VM_IO on our VMAs in line with the rest of core gem
  and TTM. Since the bo might be migrated while the VMA is still alive,
  setting the flag conditionally makes no sense: whether or not it maps
  iomem might change.
v6:
- Introduce a __i915_gem_object_is_lmem() to be used in situations where we
  know that a fence that can't currently signal keeps the object from being
  migrated or evicted.
- Move a couple of shmem warnings for DGFX to a later patch where we
  actually move system memory to TTM.
---
 drivers/gpu/drm/i915/gem/i915_gem_internal.c  |  4 +-
 drivers/gpu/drm/i915/gem/i915_gem_lmem.c  | 22 +++
 drivers/gpu/drm/i915/gem/i915_gem_lmem.h  |  2 +
 drivers/gpu/drm/i915/gem/i915_gem_mman.c  | 12 +++---
 drivers/gpu/drm/i915/gem/i915_gem_object.c| 38 +++
 drivers/gpu/drm/i915/gem/i915_gem_object.h| 14 ++-
 .../gpu/drm/i915/gem/i915_gem_object_types.h  | 20 +-
 drivers/gpu/drm/i915/gem/i915_gem_pages.c |  2 +-
 drivers/gpu/drm/i915/gem/i915_gem_phys.c  |  2 +-
 drivers/gpu/drm/i915/gem/i915_gem_shmem.c |  7 ++--
 drivers/gpu/drm/i915/gem/i915_gem_ttm.c   |  2 +-
 drivers/gpu/drm/i915/gem/i915_gem_userptr.c   |  4 +-
 .../drm/i915/gem/selftests/huge_gem_object.c  |  4 +-
 .../gpu/drm/i915/gem/selftests/huge_pages.c   |  5 +--
 .../drm/i915/gem/selftests/i915_gem_mman.c|  4 +-
 .../drm/i915/gem/selftests/i915_gem_phys.c|  3 +-
 drivers/gpu/drm/i915/i915_gpu_error.c |  2 +-
 17 files changed, 101 insertions(+), 46 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_internal.c 
b/drivers/gpu/drm/i915/gem/i915_gem_internal.c
index ce6b664b10aa..13b217f75055 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_internal.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_internal.c
@@ -177,8 +177,8 @@ i915_gem_object_create_internal(struct drm_i915_private 
*i915,
return ERR_PTR(-ENOMEM);
 
drm_gem_private_object_init(&i915->drm, &obj->base, size);
-   i915_gem_object_init(obj, &i915_gem_object_internal_ops, &lock_class,
-I915_BO_ALLOC_STRUCT_PAGE);
+   i915_gem_object_init(obj, &i915_gem_object_internal_ops, &lock_class, 0);
+   obj->mem_flags |= I915_BO_FLAG_STRUCT_PAGE;
 
/*
 * Mark the object as volatile, such that the pages are marked as
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_lmem.c 
b/drivers/gpu/drm/i915/gem/i915_gem_lmem.c
index d539dffa1554..41d5182cd367 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_lmem.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_lmem.c
@@ -71,6 +71,28 @@ bool i915_gem_object_is_lmem(struct drm_i915_gem_object *obj)
  mr->type == INTEL_MEMORY_STOLEN_LOCAL);
 }
 
+/**
+ * __i915_gem_object_is_lmem - Whether the object is resident in
+ * lmem while in the fence signaling critical path.
+ * @obj: The object to check.
+ *
+ * This function is intended to be called from within the fence signaling
+ * path where the fence keeps the object from being migrated. For example
+ * during gpu reset or similar.
+ *
+ * Return: Whether the object is resident in lmem.
+ */
+bool __i915_gem_object_is_lmem(struct drm_i915_gem_object *obj)
+{
+   struct intel_memory_region *mr = READ_ONCE(obj->mm.region);
+
+#ifdef CONFIG_LOCKDEP
+   GEM_WARN_ON(dma_resv_test_signaled(obj->base.resv, true));
+#endif
+   return mr && (mr->type == INTEL_MEMORY_LOCAL ||
+ mr->type == INTEL_MEMORY_STOLEN_LOCAL);
+}
+
 struct drm_i915_gem_object *
 i915_gem_object_create_lmem(struct drm_i915_private *i915,
resource_size_t size,
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_lmem.h 
b/drivers/gpu/drm/i915/gem/i915_gem_lmem.h
index ea76fd11ccb0..27a611deba47 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_lmem.h
+++ b/drivers/gpu/drm/i915/gem/i915_gem_lmem.h
@@ -21,6 +21,8 @@ i915_gem_object_lmem_io_map(struct drm_i915_gem_object *obj,
 
 bool i915_gem_object_is_lmem(struct drm_i915_gem_object *obj);
 
+bool __i915_gem_object_is_lmem(struct drm_i915_gem_object *obj);
+
 struct drm_i915_gem_object *
 i915_gem_object_create_lmem(struct drm_i915_private *i915,
resource_size_t size,
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_mman.c 
b/drivers/gpu/drm/i915/gem/i915_gem_mman.c
index 2fd155742bd2..6497a2dbdab9 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_mman.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_mman.c
@@ -684,7 +684,7 @@ __assign_mmap_offset(struct drm_i915_gem_object *obj,
 
if (mmap_type != I915_MMAP_TYPE_GTT &&

[PATCH v6 0/3] drm/i915: Move system memory to TTM for discrete

2021-06-21 Thread Thomas Hellström
Early implementation of moving system memory for discrete cards over to
TTM. We first add the notion of objects being migratable under the object
lock to i915 gem, and add some asserts to verify that objects are either
locked or pinned when the placement is checked by the gem code.

Patch 2 deals with updating the i915 gem bookkeeping after a TTM move,
Patch 3 moves system over from shmem to TTM for discrete

Note that the mock device doesn't consider itself discrete so the TTM
system path is not checked by the mock selftests.

v2:
- Style fixes (reported by Matthew Auld)
- Drop the last patch (migration) It needs selftests and some additional work.
- Unconditionally add VM_IO at mmap time.

v3:
- More style fixes (reported by Matthew Auld)
- Don't overfill the busy placement vector (reported by Matthew Auld)

v4:
- Remove confusion around shrinkable objects (reported by Matthew Auld)

v5:
- Remove confusion around shrinkable objects again, but this time in the
  correct patch. (reported by Matthew Auld)

v6:
- One patch already committed.
- Introduce a __i915_gem_object_is_lmem() to be used in situations where we
  know that a fence that can't currently signal keeps the object from being
  migrated or evicted.
- Rebase on accelerated TTM moves
- Fix TODO:s for supporting system memory with TTM.
- Update the object GEM region after a TTM move if compatible.
- Move a couple of warnings for shmem on DGFX.

Thomas Hellström (3):
  drm/i915: Update object placement flags to be mutable
  drm/i915/ttm: Adjust gem flags and caching settings after a move
  drm/i915/ttm: Use TTM for system memory

 drivers/gpu/drm/i915/gem/i915_gem_internal.c  |   4 +-
 drivers/gpu/drm/i915/gem/i915_gem_lmem.c  |  22 ++
 drivers/gpu/drm/i915/gem/i915_gem_lmem.h  |   2 +
 drivers/gpu/drm/i915/gem/i915_gem_mman.c  |  12 +-
 drivers/gpu/drm/i915/gem/i915_gem_object.c|  38 
 drivers/gpu/drm/i915/gem/i915_gem_object.h|  14 +-
 .../gpu/drm/i915/gem/i915_gem_object_types.h  |  20 +-
 drivers/gpu/drm/i915/gem/i915_gem_pages.c |   2 +-
 drivers/gpu/drm/i915/gem/i915_gem_phys.c  |   2 +-
 drivers/gpu/drm/i915/gem/i915_gem_shmem.c |  10 +-
 drivers/gpu/drm/i915/gem/i915_gem_ttm.c   | 194 +-
 drivers/gpu/drm/i915/gem/i915_gem_userptr.c   |   4 +-
 .../drm/i915/gem/selftests/huge_gem_object.c  |   4 +-
 .../gpu/drm/i915/gem/selftests/huge_pages.c   |   5 +-
 .../drm/i915/gem/selftests/i915_gem_mman.c|   4 +-
 .../drm/i915/gem/selftests/i915_gem_phys.c|   3 +-
 drivers/gpu/drm/i915/i915_drv.h   |   3 -
 drivers/gpu/drm/i915/i915_gpu_error.c |   2 +-
 drivers/gpu/drm/i915/intel_memory_region.c|   7 +-
 drivers/gpu/drm/i915/intel_memory_region.h|   8 +
 20 files changed, 265 insertions(+), 95 deletions(-)

-- 
2.31.1



Re: [PATCH 0/5] ti-sn65dsi83: Finalize transition to atomic operations

2021-06-21 Thread Sam Ravnborg
Hi Laurent,

> > 
> > It is news to me that the atomic ops are the way to go - but then I have
> > been off-line for a while so no surprise or maybe I just missed it
> > before.
> 
> They're not mandatory as such, but they give us access to the atomic
> state, which is sometimes required. Overall I think it would be nice to
> move to the atomic operations and drop the legacy ones, to avoid
> maintaining two sets of operations. It will take time :-)
Yeah, but if we can get more people working on the job..
> 
> > It would be good if the comments in drm_bridge.h could point out what is
> > deprecated, so we know what to avoid in new and updated bridge drivers.
> > But this is all un-related to this series.
> 
> It's a good point. Would you like to submit a patch, or should I do so ?
Please do, as I would have to dig around to do it right; I have
forgotten most of the drm internals over the last couple of months.

Just something simple like: "This is deprecated, do not use!" would do
the trick for me. Then I would know what to look for if I was reviewing
a new bridge driver or patching an existing one or just trying to gently
push someone in the right direction.

For drm_drv.h this really helped me to understand what should not be
used.

Sam
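
Not an existing comment, but the kind of note meant here could look like this
on each legacy hook in include/drm/drm_bridge.h (wording is only a suggestion):

	/**
	 * @enable:
	 *
	 * ... existing description ...
	 *
	 * NOTE: this hook is deprecated, do not use it in new bridge
	 * drivers; implement @atomic_enable instead.
	 */
	void (*enable)(struct drm_bridge *bridge);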


[Bug 213391] AMDGPU retries page fault with some specific processes amdgpu and sometimes followed [gfxhub0] retry page fault until *ERROR* ring gfx timeout, but soft recovered

2021-06-21 Thread bugzilla-daemon
https://bugzilla.kernel.org/show_bug.cgi?id=213391

--- Comment #25 from Leandro Jacques (ls...@yahoo.com) ---
(In reply to Dominic Letz from comment #21)

Trying the same version of linux-firmware, 20210315. Let's check how it goes.

-- 
You may reply to this email to add a comment.

You are receiving this mail because:
You are watching the assignee of the bug.

Re: [PATCH v3 1/2] habanalabs: define uAPI to export FD for DMA-BUF

2021-06-21 Thread Oded Gabbay
On Mon, Jun 21, 2021 at 9:27 PM Daniel Vetter  wrote:
>
> On Mon, Jun 21, 2021 at 7:55 PM Jason Gunthorpe  wrote:
> > On Mon, Jun 21, 2021 at 07:26:14PM +0300, Oded Gabbay wrote:
> > > On Mon, Jun 21, 2021 at 5:12 PM Jason Gunthorpe  wrote:
> > > >
> > > > On Mon, Jun 21, 2021 at 03:02:10PM +0200, Greg KH wrote:
> > > > > On Mon, Jun 21, 2021 at 02:28:48PM +0200, Daniel Vetter wrote:
> > > >
> > > > > > Also I'm wondering which is the other driver that we share buffers
> > > > > > with. The gaudi stuff doesn't have real struct pages as backing
> > > > > > storage, it only fills out the dma_addr_t. That tends to blow up 
> > > > > > with
> > > > > > other drivers, and the only place where this is guaranteed to work 
> > > > > > is
> > > > > > if you have a dynamic importer which sets the allow_peer2peer flag.
> > > > > > Adding maintainers from other subsystems who might want to chime in
> > > > > > here. So even aside of the big question as-is this is broken.
> > > > >
> > > > > From what I can tell this driver is sending the buffers to other
> > > > > instances of the same hardware,
> > > >
> > > > A dmabuf is consumed by something else in the kernel calling
> > > > dma_buf_map_attachment() on the FD.
> > > >
> > > > What is the other side of this? I don't see any
> > > > dma_buf_map_attachment() calls in drivers/misc, or added in this patch
> > > > set.
> > >
> > > This patch-set is only to enable the support for the exporter side.
> > > The "other side" is any generic RDMA networking device that will want
> > > to perform p2p communication over PCIe with our GAUDI accelerator.
> > > An example is indeed the mlnx5 card which has already integrated
> > > support for being an "importer".
> >
> > It raises the question of how you are testing this if you aren't using
> > it with the only intree driver: mlx5.
>
> For p2p dma-buf there's also amdgpu as a possible in-tree candiate
> driver, that's why I added amdgpu folks. Otoh I'm not aware of AI+GPU
> combos being much in use, at least with upstream gpu drivers (nvidia
> blob is a different story ofc, but I don't care what they do in their
> own world).
> -Daniel
> --
We have done / are doing three things:
1. I wrote a simple "importer" driver that emulates an RDMA driver. It
calls all the IB_UMEM_DMABUF functions, same as the mlx5 driver does,
but instead of using h/w it accesses the BAR directly. We wrote
several tests that emulate the real application, i.e. asking the
habanalabs driver to create a dma-buf object and export its FD back to
userspace. The userspace then sends the FD to the "importer" driver,
which attaches to it, gets the SG list and accesses the memory on the
GAUDI device (the basic flow is sketched below). This gave me confidence
that the way we integrated the exporter is basically correct/working.

2. We are trying to do a POC with a MLNX card we have, WIP.

3. We are working with another 3rd-party RDMA device whose driver
is now adding support for being an "importer". Also WIP.

In both points 2 & 3 we haven't yet reached the actual stage of checking
this feature.
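
Roughly, the importer flow in point 1 follows the standard dma-buf import
sequence. A sketch with made-up names (not the actual test driver); note that a
real peer-to-peer importer, as discussed above, would use
dma_buf_dynamic_attach() with allow_peer2peer set rather than plain
dma_buf_attach():

/* needs <linux/dma-buf.h> */
static int toy_import_and_access(struct device *dev, int fd)
{
	struct dma_buf *buf;
	struct dma_buf_attachment *attach;
	struct sg_table *sgt;
	int ret = 0;

	buf = dma_buf_get(fd);	/* FD exported by the habanalabs driver */
	if (IS_ERR(buf))
		return PTR_ERR(buf);

	attach = dma_buf_attach(buf, dev);
	if (IS_ERR(attach)) {
		ret = PTR_ERR(attach);
		goto put_buf;
	}

	sgt = dma_buf_map_attachment(attach, DMA_BIDIRECTIONAL);
	if (IS_ERR(sgt)) {
		ret = PTR_ERR(sgt);
		goto detach;
	}

	/* ... walk sgt and program the importing device's DMA engine ... */

	dma_buf_unmap_attachment(attach, sgt, DMA_BIDIRECTIONAL);
detach:
	dma_buf_detach(buf, attach);
put_buf:
	dma_buf_put(buf);
	return ret;
}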

Another thing I want to emphasize is that we are doing p2p only
through the export/import of the FD. We do *not* allow the user to
mmap the dma-buf as we do not support direct IO. So there is no access
to these pages from userspace.

Thanks,
Oded


Re: [PATCH 0/5] ti-sn65dsi83: Finalize transition to atomic operations

2021-06-21 Thread Laurent Pinchart
Hi Sam,

On Mon, Jun 21, 2021 at 08:49:53PM +0200, Sam Ravnborg wrote:
> On Mon, Jun 21, 2021 at 03:55:13PM +0300, Laurent Pinchart wrote:
> > Hello,
> > 
> > This patch series is based on top of "[PATCH] drm/bridge: ti-sn65dsi83:
> > Replace connector format patching with atomic_get_input_bus_fmts". It
> > completes the transition to atomic operations in the ti-sn65dsi83
> > driver. The main reason for this change is patch 4/5 that fixes a few
> > issues in the driver (see the patch's commit message for details), but
> > overall it also brings the driver to the most recent API which is nice
> > in itself.
> > 
> > Laurent Pinchart (5):
> >   drm: bridge: ti-sn65dsi83: Move LVDS format selection to .mode_set()
> >   drm: bridge: ti-sn65dsi83: Pass mode explicitly to helper functions
> >   drm: bridge: ti-sn65dsi83: Switch to atomic operations
> >   drm: bridge: ti-sn65dsi83: Retrieve output format from bridge state
> >   drm: bridge: ti-sn65dsi83: Retrieve the display mode from the state
> > 
> >  drivers/gpu/drm/bridge/ti-sn65dsi83.c | 166 +-
> >  1 file changed, 82 insertions(+), 84 deletions(-)
> 
> I have browsed the series and it all looked good.
> Acked-by: Sam Ravnborg 
> 
> on them all.
> 
> It is news to me that the atomic ops are the way to go - but then I have
> been off-line for a while so no surprise or maybe I just missed it
> before.

They're not mandatory as such, but they give us access to the atomic
state, which is sometimes required. Overall I think it would be nice to
move to the atomic operations and drop the legacy ones, to avoid
maintaining two sets of operations. It will take time :-)
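
As a rough illustration only (made-up driver name, not one of the bridges
discussed here), the atomic hooks plus the stock state helpers look roughly
like this:

/* needs <drm/drm_bridge.h> and <drm/drm_atomic_state_helper.h> */
static void toy_bridge_atomic_enable(struct drm_bridge *bridge,
				     struct drm_bridge_state *old_bridge_state)
{
	/*
	 * Unlike the legacy .enable hook, the atomic variant can reach the
	 * global atomic state, e.g. to look up the negotiated bus format or
	 * the connector feeding the bridge.
	 */
	struct drm_atomic_state *state = old_bridge_state->base.state;

	(void)state;	/* program the hardware here */
}

static const struct drm_bridge_funcs toy_bridge_funcs = {
	.atomic_enable		= toy_bridge_atomic_enable,
	.atomic_duplicate_state	= drm_atomic_helper_bridge_duplicate_state,
	.atomic_destroy_state	= drm_atomic_helper_bridge_destroy_state,
	.atomic_reset		= drm_atomic_helper_bridge_reset,
};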

> It would be good if the comments in drm_bridge.h could point out what is
> deprecated, so we know what to avoid in new and updated bridge drivers.
> But this is all un-related to this series.

It's a good point. Would you like to submit a patch, or should I do so ?

-- 
Regards,

Laurent Pinchart


Re: [PATCH] drm/radeon: delete useless function return values & remove meaningless if(r) check code

2021-06-21 Thread Alex Deucher
Applied.  Thanks!

Alex

On Mon, Jun 21, 2021 at 9:15 AM Christian König
 wrote:
>
> Am 21.06.21 um 15:05 schrieb Bernard Zhao:
> > Function radeon_fence_driver_init always returns success,
> > so the function type can be changed to void.
> > This patch first deletes the check of the return
> > value of the radeon_fence_driver_init call, then
> > changes the function declaration and definition to void type.
> >
> > Signed-off-by: Bernard Zhao 
>
> Reviewed-by: Christian König 
>
> > ---
> >   drivers/gpu/drm/radeon/cik.c  | 4 +---
> >   drivers/gpu/drm/radeon/evergreen.c| 4 +---
> >   drivers/gpu/drm/radeon/ni.c   | 4 +---
> >   drivers/gpu/drm/radeon/r100.c | 4 +---
> >   drivers/gpu/drm/radeon/r300.c | 4 +---
> >   drivers/gpu/drm/radeon/r420.c | 5 +
> >   drivers/gpu/drm/radeon/r520.c | 4 +---
> >   drivers/gpu/drm/radeon/r600.c | 4 +---
> >   drivers/gpu/drm/radeon/radeon.h   | 2 +-
> >   drivers/gpu/drm/radeon/radeon_fence.c | 5 +
> >   drivers/gpu/drm/radeon/rs400.c| 4 +---
> >   drivers/gpu/drm/radeon/rs600.c| 4 +---
> >   drivers/gpu/drm/radeon/rs690.c| 4 +---
> >   drivers/gpu/drm/radeon/rv515.c| 4 +---
> >   drivers/gpu/drm/radeon/rv770.c| 4 +---
> >   drivers/gpu/drm/radeon/si.c   | 4 +---
> >   16 files changed, 16 insertions(+), 48 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/radeon/cik.c b/drivers/gpu/drm/radeon/cik.c
> > index 42a8afa839cb..f6cf0b8fdd83 100644
> > --- a/drivers/gpu/drm/radeon/cik.c
> > +++ b/drivers/gpu/drm/radeon/cik.c
> > @@ -8584,9 +8584,7 @@ int cik_init(struct radeon_device *rdev)
> >   radeon_get_clock_info(rdev->ddev);
> >
> >   /* Fence driver */
> > - r = radeon_fence_driver_init(rdev);
> > - if (r)
> > - return r;
> > + radeon_fence_driver_init(rdev);
> >
> >   /* initialize memory controller */
> >   r = cik_mc_init(rdev);
> > diff --git a/drivers/gpu/drm/radeon/evergreen.c 
> > b/drivers/gpu/drm/radeon/evergreen.c
> > index 8e9e88bf1f43..36a888e1b179 100644
> > --- a/drivers/gpu/drm/radeon/evergreen.c
> > +++ b/drivers/gpu/drm/radeon/evergreen.c
> > @@ -5208,9 +5208,7 @@ int evergreen_init(struct radeon_device *rdev)
> >   /* Initialize clocks */
> >   radeon_get_clock_info(rdev->ddev);
> >   /* Fence driver */
> > - r = radeon_fence_driver_init(rdev);
> > - if (r)
> > - return r;
> > + radeon_fence_driver_init(rdev);
> >   /* initialize AGP */
> >   if (rdev->flags & RADEON_IS_AGP) {
> >   r = radeon_agp_init(rdev);
> > diff --git a/drivers/gpu/drm/radeon/ni.c b/drivers/gpu/drm/radeon/ni.c
> > index ab7bd3080217..4a364ca7a1be 100644
> > --- a/drivers/gpu/drm/radeon/ni.c
> > +++ b/drivers/gpu/drm/radeon/ni.c
> > @@ -2375,9 +2375,7 @@ int cayman_init(struct radeon_device *rdev)
> >   /* Initialize clocks */
> >   radeon_get_clock_info(rdev->ddev);
> >   /* Fence driver */
> > - r = radeon_fence_driver_init(rdev);
> > - if (r)
> > - return r;
> > + radeon_fence_driver_init(rdev);
> >   /* initialize memory controller */
> >   r = evergreen_mc_init(rdev);
> >   if (r)
> > diff --git a/drivers/gpu/drm/radeon/r100.c b/drivers/gpu/drm/radeon/r100.c
> > index fcfcaec25a9e..aa6800b0e198 100644
> > --- a/drivers/gpu/drm/radeon/r100.c
> > +++ b/drivers/gpu/drm/radeon/r100.c
> > @@ -4056,9 +4056,7 @@ int r100_init(struct radeon_device *rdev)
> >   /* initialize VRAM */
> >   r100_mc_init(rdev);
> >   /* Fence driver */
> > - r = radeon_fence_driver_init(rdev);
> > - if (r)
> > - return r;
> > + radeon_fence_driver_init(rdev);
> >   /* Memory manager */
> >   r = radeon_bo_init(rdev);
> >   if (r)
> > diff --git a/drivers/gpu/drm/radeon/r300.c b/drivers/gpu/drm/radeon/r300.c
> > index 92643dfdd8a8..621ff174dff3 100644
> > --- a/drivers/gpu/drm/radeon/r300.c
> > +++ b/drivers/gpu/drm/radeon/r300.c
> > @@ -1549,9 +1549,7 @@ int r300_init(struct radeon_device *rdev)
> >   /* initialize memory controller */
> >   r300_mc_init(rdev);
> >   /* Fence driver */
> > - r = radeon_fence_driver_init(rdev);
> > - if (r)
> > - return r;
> > + radeon_fence_driver_init(rdev);
> >   /* Memory manager */
> >   r = radeon_bo_init(rdev);
> >   if (r)
> > diff --git a/drivers/gpu/drm/radeon/r420.c b/drivers/gpu/drm/radeon/r420.c
> > index 1ed4407b91aa..7e6320e8c6a0 100644
> > --- a/drivers/gpu/drm/radeon/r420.c
> > +++ b/drivers/gpu/drm/radeon/r420.c
> > @@ -425,10 +425,7 @@ int r420_init(struct radeon_device *rdev)
> >   r300_mc_init(rdev);
> >   r420_debugfs(rdev);
> >   /* Fence driver */
> > - r = radeon_fence_driver_init(rdev);
> > - if (r) {
> > - return r;
> > - }
> > + radeon_fence_driver_init(rdev);
> >   /* Memory manager */
> >   r = radeon_bo_init(rdev);
> >   

[Bug 213391] AMDGPU retries page fault with some specific processes amdgpu and sometimes followed [gfxhub0] retry page fault until *ERROR* ring gfx timeout, but soft recovered

2021-06-21 Thread bugzilla-daemon
https://bugzilla.kernel.org/show_bug.cgi?id=213391

--- Comment #24 from Leandro Jacques (ls...@yahoo.com) ---
Created attachment 297557
  --> https://bugzilla.kernel.org/attachment.cgi?id=297557&action=edit
Firmware info

The downgrade to kernel 5.4.123 didn't have any effect; I had the same bug. Now
I'm attaching my firmware version information.

-- 
You may reply to this email to add a comment.

You are receiving this mail because:
You are watching the assignee of the bug.

Re: [PATCH 0/5] ti-sn65dsi83: Finalize transition to atomic operations

2021-06-21 Thread Sam Ravnborg
Hi Laurent,

On Mon, Jun 21, 2021 at 03:55:13PM +0300, Laurent Pinchart wrote:
> Hello,
> 
> This patch series is based on top of "[PATCH] drm/bridge: ti-sn65dsi83:
> Replace connector format patching with atomic_get_input_bus_fmts". It
> completes the transition to atomic operations in the ti-sn65dsi83
> driver. The main reason for this change is patch 4/5 that fixes a few
> issues in the driver (see the patch's commit message for details), but
> overall it also brings the driver to the most recent API which is nice
> in itself.
> 
> Laurent Pinchart (5):
>   drm: bridge: ti-sn65dsi83: Move LVDS format selection to .mode_set()
>   drm: bridge: ti-sn65dsi83: Pass mode explicitly to helper functions
>   drm: bridge: ti-sn65dsi83: Switch to atomic operations
>   drm: bridge: ti-sn65dsi83: Retrieve output format from bridge state
>   drm: bridge: ti-sn65dsi83: Retrieve the display mode from the state
> 
>  drivers/gpu/drm/bridge/ti-sn65dsi83.c | 166 +-
>  1 file changed, 82 insertions(+), 84 deletions(-)

I have browsed the series and it all looked good.
Acked-by: Sam Ravnborg 

on them all.

It is news to me that the atomic ops are the way to go - but then I have
been off-line for a while so no surprise or maybe I just missed it
before.

It would be good if the comments in drm_bridge.h could point out what is
deprecated, so we know what to avoid in new and updated bridge drivers.
But this is all un-related to this series.

Sam


Re: [v7 5/5] drm/panel-simple: Add Samsung ATNA33XC20

2021-06-21 Thread Sam Ravnborg
Hi Doug,

On Mon, Jun 21, 2021 at 08:34:51AM -0700, Doug Anderson wrote:
> Hi,
> 
> On Sun, Jun 20, 2021 at 3:01 AM Sam Ravnborg  wrote:
> >
> > Hi Rajeev
> > On Sat, Jun 19, 2021 at 04:10:30PM +0530, Rajeev Nandan wrote:
> > > Add Samsung 13.3" FHD eDP AMOLED panel.
> > >
> > > Signed-off-by: Rajeev Nandan 
> > > Reviewed-by: Douglas Anderson 
> > > ---
> > >
> > > Changes in v4:
> > > - New
> > >
> > > Changes in v5:
> > > - Remove "uses_dpcd_backlight" property, not required now. (Douglas)
> > >
> > > Changes in v7:
> > > - Update disable_to_power_off and power_to_enable delays. (Douglas)
> > >
> > >  drivers/gpu/drm/panel/panel-simple.c | 33 
> > > +
> > >  1 file changed, 33 insertions(+)
> > >
> > > diff --git a/drivers/gpu/drm/panel/panel-simple.c 
> > > b/drivers/gpu/drm/panel/panel-simple.c
> > > index 86e5a45..4adc44a 100644
> > > --- a/drivers/gpu/drm/panel/panel-simple.c
> > > +++ b/drivers/gpu/drm/panel/panel-simple.c
> > > @@ -3562,6 +3562,36 @@ static const struct panel_desc 
> > > rocktech_rk101ii01d_ct = {
> > >   .connector_type = DRM_MODE_CONNECTOR_LVDS,
> > >  };
> > >
> > > +static const struct drm_display_mode samsung_atna33xc20_mode = {
> > > + .clock = 138770,
> > > + .hdisplay = 1920,
> > > + .hsync_start = 1920 + 48,
> > > + .hsync_end = 1920 + 48 + 32,
> > > + .htotal = 1920 + 48 + 32 + 80,
> > > + .vdisplay = 1080,
> > > + .vsync_start = 1080 + 8,
> > > + .vsync_end = 1080 + 8 + 8,
> > > + .vtotal = 1080 + 8 + 8 + 16,
> > > + .flags = DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_NVSYNC,
> > > +};
> > > +
> > > +static const struct panel_desc samsung_atna33xc20 = {
> > > + .modes = &samsung_atna33xc20_mode,
> > > + .num_modes = 1,
> > > + .bpc = 10,
> > > + .size = {
> > > + .width = 294,
> > > + .height = 165,
> > > + },
> > > + .delay = {
> > > + .disable_to_power_off = 200,
> > > + .power_to_enable = 400,
> > > + .hpd_absent_delay = 200,
> > > + .unprepare = 500,
> > > + },
> > > + .connector_type = DRM_MODE_CONNECTOR_eDP,
> > > +};
> >
> > bus_format is missing. There should be a warning about this when you
> > probe the display.
> 
> Sam: I'm curious about the requirement of hardcoding bus_format like
> this for eDP panels. Most eDP panels support a variety of bits per
> pixel and do so dynamically. Ones I've poked at freely support 6bpp
> and 8bpp. Presumably this one supports both of those modes and also
> 10bpp. I haven't done detailed research on it, but it would also
> surprise me if the "bus format" for a given bpp needed to be specified
> for eDP. Presumably, since eDP has most of the "autodetect" type
> features of DP, if the format needed to be accounted for you could
> query the hardware?
> 
> Looking at the datasheet for the ti-sn65dsi86 MIPI-to-eDP bridge chip
> I see that it explicitly calls out the bus formats that it supports
> for the MIPI side but doesn't call out anything for eDP. That would
> tend to support my belief that there isn't variance on the eDP side...
> 
> Maybe the right fix is to actually change the check not to give a
> warning for eDP panels? ...or am I misunderstanding?

I have never dived into the datasheets of eDP panels so I do not know.
The checks were added based on what we had in-tree, and it is no surprise
if they need an update or are just plain wrong.
I expect you to be in a better position to make the call here - but we
should not add panels that trigger warnings, so either fix the warnings
or fix the panel description.

Sam


Re: [v7 1/5] drm/panel: add basic DP AUX backlight support

2021-06-21 Thread Sam Ravnborg
Hi Rajeev,

On Mon, Jun 21, 2021 at 02:08:17PM +0530, rajee...@codeaurora.org wrote:
> Hi Sam,
> 
> On 20-06-2021 15:01, Sam Ravnborg wrote:
> > Hi Rajeev
> > 
> > On Sat, Jun 19, 2021 at 04:10:26PM +0530, Rajeev Nandan wrote:
> > > Some panels support backlight control over DP AUX channel using
> > > VESA's standard backlight control interface.
> > > Using new DRM eDP backlight helpers, add support to create and
> > > register a backlight for those panels in drm_panel to simplify
> > > the panel drivers.
> > > 
> > > The panel driver with access to "struct drm_dp_aux" can create and
> > > register a backlight device using following code snippet in its
> > > probe() function:
> > > 
> > >   err = drm_panel_dp_aux_backlight(panel, aux);
> > >   if (err)
> > >   return err;
> > 
> > It is very good to have this supported by drm_panel, so we avoid
> > boilerplate in various drivers.
> > 
> > > 
> > > Then drm_panel will handle backlight_(enable|disable) calls
> > > similar to the case when drm_panel_of_backlight() is used.
> > > 
> > > Currently, we are not supporting one feature where the source
> > > device can combine the backlight brightness levels set through
> > > DP AUX and the BL_PWM_DIM eDP connector pin. Since it's not
> > > required for the basic backlight controls, it can be added later.
> > > 
> > > Signed-off-by: Rajeev Nandan 
> > > Reviewed-by: Douglas Anderson 
> > > Reviewed-by: Lyude Paul 
> > > ---
> > > 
> > > (no changes since v6)
> > > 
> > > Changes in v5:
> > > - New
> > > 
> > > Changes in v6:
> > > - Fixed ordering of memory allocation (Douglas)
> > > - Updated word wrapping in a comment (Douglas)
> > > 
> > >  drivers/gpu/drm/drm_panel.c | 108
> > > 
> > >  include/drm/drm_panel.h |  15 --
> > >  2 files changed, 119 insertions(+), 4 deletions(-)
> > > 
> > > diff --git a/drivers/gpu/drm/drm_panel.c b/drivers/gpu/drm/drm_panel.c
> > > index f634371..9e65342 100644
> > > --- a/drivers/gpu/drm/drm_panel.c
> > > +++ b/drivers/gpu/drm/drm_panel.c
> > > @@ -26,12 +26,20 @@
> > >  #include 
> > > 
> > >  #include 
> > > +#include 
> > >  #include 
> > >  #include 
> > > 
> > >  static DEFINE_MUTEX(panel_lock);
> > >  static LIST_HEAD(panel_list);
> > > 
> > > +struct dp_aux_backlight {
> > > + struct backlight_device *base;
> > > + struct drm_dp_aux *aux;
> > > + struct drm_edp_backlight_info info;
> > > + bool enabled;
> > > +};
> > > +
> > >  /**
> > >   * DOC: drm panel
> > >   *
> > > @@ -342,6 +350,106 @@ int drm_panel_of_backlight(struct drm_panel
> > > *panel)
> > >   return 0;
> > >  }
> > >  EXPORT_SYMBOL(drm_panel_of_backlight);
> > > +
> > > +static int dp_aux_backlight_update_status(struct backlight_device
> > > *bd)
> > > +{
> > > + struct dp_aux_backlight *bl = bl_get_data(bd);
> > > + u16 brightness = backlight_get_brightness(bd);
> > backlight_get_brightness() returns an int, so using u16 seems wrong.
> > But then drm_edp_backlight_enable() uses u16 for level - so I guess it
> > is OK.
> > We use unsigned long, int, u16 for brightness. Looks like something one
> > could look at one day, but today is not that day.
> > 
> > > + int ret = 0;
> > > +
> > > + if (brightness > 0) {
> > Use backlight_is_blank(bd) here, as this is really what you test for.
> 
> The backlight_get_brightness() used above has the backlight_is_blank() check
> and returns brightness 0 when the backlight_is_blank(bd) is true.
> So, instead of calling backlight_is_blank(bd), we are checking brightness
> value here.
> I took the reference from pwm_backlight_update_status() of the PWM backlight
> driver (drivers/video/backlight/pwm_bl.c)
> 
> Yes, we can change this _if_ condition to use backlight_is_blank(bd), as
> this is an inline function, and is more meaningful.
> With this, there would be one change in the behavior of
> _backlight_update_status function in the following case:
> 
> - Setting brightness=0 when the backlight is not blank:
> In the current case, setting brightness=0 disables the backlight.
> In the new case, setting brightness=0 will set the brightness to 0 but will
> not disable the backlight.
> 
> I think that should not be a problem?

Reading "ABI/stable/sysfs-class-backlight" does not say anything about
any special bahaviour with brightness == 0. Some panels may not dim back
to dark on brightness == 0, just barely readable. So in such cases doing
something special on the brightness == 0 case is wrong.

I recall that some backlight drivets do not get this so it is easy to
make it wrong. Unless one of the backlight people tell otherwise the
safe choice is to avoid the specail handling of brightness here.
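
Purely as an illustration of that suggestion, and not the actual patch, an
update_status without any brightness == 0 special case could look roughly
like the sketch below. The drm_edp_backlight_* helpers are the eDP
backlight helpers this series builds on; double-check their exact
signatures against drm_dp_helper.h, and real code would likely switch to a
plain set_level call once the backlight has been enabled:

	static int dp_aux_backlight_update_status(struct backlight_device *bd)
	{
		struct dp_aux_backlight *bl = bl_get_data(bd);
		u16 level = backlight_get_brightness(bd); /* already 0 when blanked */

		/* blank/unblank decides enable/disable; brightness == 0 is not special */
		if (backlight_is_blank(bd))
			return drm_edp_backlight_disable(bl->aux, &bl->info);

		return drm_edp_backlight_enable(bl->aux, &bl->info, level);
	}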

> 
> > 
> > I cannot see why you need the extra check on ->enabled?
> > Would it be sufficient to check backlight_is_blank() only?
> 
> This extra check on the bl->enabled flag is added to avoid enabling/disabling
> the backlight again if it is already enabled/disabled.
> Using this flag we can know the transition between 

Re: [RESEND PATCH 1/3] drm/panel: Add connector_type and bus_format for AUO G104SN02 V2 panel

2021-06-21 Thread Sam Ravnborg
Hi Stefan.

On Mon, Jun 21, 2021 at 05:09:28PM +0200, Stefan Riedmueller wrote:
> The AUO G104SN02 V2 is an LVDS display which supports 6 and 8 bpc PSWG.
> Add the corresponding connector type and 8 bpc as default bus_format.
> 
> Signed-off-by: Stefan Riedmueller 
> Reviewed-by: Laurent Pinchart 
> ---
> Hi,
> I added the reviewed-by tag from Laurent Pinchart for the RESEND, hope
> that is ok.
> https://lore.kernel.org/dri-devel/ynchyskddg%2fjs...@pendragon.ideasonboard.com/
Thanks, that's a help so I did not have to add it.
All three patches applied to drm-misc-next now.

Sam


Re: [PATCH v3 1/2] habanalabs: define uAPI to export FD for DMA-BUF

2021-06-21 Thread Daniel Vetter
On Mon, Jun 21, 2021 at 7:55 PM Jason Gunthorpe  wrote:
> On Mon, Jun 21, 2021 at 07:26:14PM +0300, Oded Gabbay wrote:
> > On Mon, Jun 21, 2021 at 5:12 PM Jason Gunthorpe  wrote:
> > >
> > > On Mon, Jun 21, 2021 at 03:02:10PM +0200, Greg KH wrote:
> > > > On Mon, Jun 21, 2021 at 02:28:48PM +0200, Daniel Vetter wrote:
> > >
> > > > > Also I'm wondering which is the other driver that we share buffers
> > > > > with. The gaudi stuff doesn't have real struct pages as backing
> > > > > storage, it only fills out the dma_addr_t. That tends to blow up with
> > > > > other drivers, and the only place where this is guaranteed to work is
> > > > > if you have a dynamic importer which sets the allow_peer2peer flag.
> > > > > Adding maintainers from other subsystems who might want to chime in
> > > > > here. So even aside of the big question as-is this is broken.
> > > >
> > > > From what I can tell this driver is sending the buffers to other
> > > > instances of the same hardware,
> > >
> > > A dmabuf is consumed by something else in the kernel calling
> > > dma_buf_map_attachment() on the FD.
> > >
> > > What is the other side of this? I don't see any
> > > dma_buf_map_attachment() calls in drivers/misc, or added in this patch
> > > set.
> >
> > This patch-set is only to enable the support for the exporter side.
> > The "other side" is any generic RDMA networking device that will want
> > to perform p2p communication over PCIe with our GAUDI accelerator.
> > An example is indeed the mlnx5 card which has already integrated
> > support for being an "importer".
>
> It raises the question of how you are testing this if you aren't using
> it with the only intree driver: mlx5.

For p2p dma-buf there's also amdgpu as a possible in-tree candidate
driver, that's why I added amdgpu folks. Otoh I'm not aware of AI+GPU
combos being much in use, at least with upstream gpu drivers (nvidia
blob is a different story ofc, but I don't care what they do in their
own world).
-Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch


[no subject]

2021-06-21 Thread shashank singh
Hello everyone, my name is Shashank Singh. I hope this is the right
platform to reach out to the 'X.org' community. I was curious about the
X.org Endless Vacation of Code. Is this program still active?


Re: [PATCH v2 2/2] drm/panfrost: Queue jobs on the hardware

2021-06-21 Thread Alyssa Rosenzweig
> Also that feature was only introduced in t76x. So relying on that would
> sadly kill off support for t60x, t62x and t72x (albeit I'm not sure how
> 'supported' these are with Mesa anyway).

t60x and t62x are not supported, but t720 very much is (albeit GLES2
only, versus t760+ getting GLES3.1 and soon Vulkan)... t720 has
deqp-gles2 in CI and is ~close to passing everything... Please don't
break t720 :)


Re: [PATCH v13 01/12] swiotlb: Refactor swiotlb init functions

2021-06-21 Thread Stefano Stabellini
On Fri, 18 Jun 2021, Christoph Hellwig wrote:
> On Fri, Jun 18, 2021 at 09:09:17AM -0500, Tom Lendacky wrote:
> > > swiotlb_init_with_tbl uses memblock_alloc to allocate the io_tlb_mem
> > > and memblock_alloc[1] will do memset in memblock_alloc_try_nid[2], so
> > > swiotlb_init_with_tbl is also good.
> > > I'm happy to add the memset in swiotlb_init_io_tlb_mem if you think
> > > it's clearer and safer.
> > 
> > On x86, if the memset is done before set_memory_decrypted() and memory
> > encryption is active, then the memory will look like ciphertext afterwards
> > and not be zeroes. If zeroed memory is required, then a memset must be
> > done after the set_memory_decrypted() calls.
> 
> Which should be fine - we don't care that the memory is cleared to 0,
> just that it doesn't leak other data.  Maybe a comment would be useful,
> though,

Just as a clarification: I was referring to the zeroing of "mem" in
swiotlb_late_init_with_tbl and swiotlb_init_with_tbl. While it looks
like Tom and Christoph are talking about the zeroing of "tlb".

The zeroing of "mem" is required as some fields of struct io_tlb_mem
need to be initialized to zero. While the zeroing of "tlb" is not
required except from the point of view of not leaking sensitive data as
Christoph mentioned.
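
For the "tlb" zeroing Tom describes, the ordering is what matters. A
minimal sketch of the safe order (the variable names here are only
illustrative, not taken from the patch):

	/* decrypt first; a memset done before this would read back as ciphertext */
	set_memory_decrypted((unsigned long)vaddr, nr_pages);
	memset(vaddr, 0, nr_pages << PAGE_SHIFT);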

Either way, Claire's v14 patch 01/12 looks fine to me.


Re: [RFC PATCH 1/9] dt-bindings: display: bridge: Add Samsung SEC MIPI DSIM bindings

2021-06-21 Thread Laurent Pinchart
Hi Jagan,

Thank you for the patch.

On Mon, Jun 21, 2021 at 12:54:16PM +0530, Jagan Teki wrote:
> Samsung SEC MIPI DSIM Bridge controller is MIPI DSI bridge
> available in NXP's i.MX8M Mini and Nano Processors.
> 
> Add dt-bindings for it.
> 
> Cc: Andrzej Hajda 
> Cc: Neil Armstrong 
> Cc: Robert Foss 
> Cc: Laurent Pinchart 
> Cc: Rob Herring 
> Signed-off-by: Jagan Teki 
> ---
>  .../display/bridge/samsung,sec-dsim.yaml  | 184 ++
>  1 file changed, 184 insertions(+)
>  create mode 100644 
> Documentation/devicetree/bindings/display/bridge/samsung,sec-dsim.yaml
> 
> diff --git 
> a/Documentation/devicetree/bindings/display/bridge/samsung,sec-dsim.yaml 
> b/Documentation/devicetree/bindings/display/bridge/samsung,sec-dsim.yaml
> new file mode 100644
> index ..32f67f313dfd
> --- /dev/null
> +++ b/Documentation/devicetree/bindings/display/bridge/samsung,sec-dsim.yaml
> @@ -0,0 +1,184 @@
> +# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
> +%YAML 1.2
> +---
> +$id: http://devicetree.org/schemas/display/bridge/samsung,sec-dsim.yaml#
> +$schema: http://devicetree.org/meta-schemas/core.yaml#
> +
> +title: Samsung SEC MIPI DSIM Bridge controller on i.MX8M Mini and Nano SoCs
> +
> +maintainers:
> +  - Jagan Teki 
> +
> +description: |
> +  NWL MIPI-DSI host controller found on i.MX8 platforms. This is a dsi 
> bridge for
> +  the SOCs NWL MIPI-DSI host controller.
> +
> +allOf:
> +  - $ref: ../dsi-controller.yaml#
> +
> +properties:
> +  compatible:
> +enum:
> +  - fsl,imx8mm-sec-dsim
> +
> +  reg:
> +maxItems: 1
> +
> +  interrupts:
> +maxItems: 1
> +
> +  '#address-cells':
> +const: 1
> +
> +  '#size-cells':
> +const: 0
> +
> +  assigned-clock-parents: true
> +  assigned-clock-rates: true
> +  assigned-clocks: true
> +
> +  clocks:
> +items:
> +  - description: DSI bus clock
> +  - description: PHY_REF clock
> +
> +  clock-names:
> +items:
> +  - const: bus
> +  - const: phy_ref
> +
> +  phys:
> +maxItems: 1
> +description: phandle to the phy module representing the DPHY
> +
> +  phy-names:
> +items:
> +  - const: dphy
> +
> +  power-domains:
> +maxItems: 1
> +description: phandle to the associated power domain
> +
> +  samsung,burst-clock-frequency:
> +$ref: /schemas/types.yaml#/definitions/uint32
> +description:
> +  DSIM high speed burst mode frequency.
> +
> +  samsung,esc-clock-frequency:
> +$ref: /schemas/types.yaml#/definitions/uint32
> +description:
> +  DSIM escape mode frequency.
> +
> +  samsung,pll-clock-frequency:
> +$ref: /schemas/types.yaml#/definitions/uint32
> +description:
> +  DSIM oscillator clock frequency.

Why do you need those three properties ? They look like configuration
information to me, not system description. If they are needed, their
description needs to explain how to set them. Looking at the three
descriptions above I have no idea what to select for those frequencies.

> +
> +  ports:
> +$ref: /schemas/graph.yaml#/properties/ports
> +
> +properties:
> +  port@0:
> +$ref: /schemas/graph.yaml#/$defs/port-base
> +description:
> +  Input port node to receive pixel data from the
> +  display controller. Exactly one endpoint must be
> +  specified.
> +properties:
> +  endpoint@0:
> +$ref: /schemas/graph.yaml#/properties/endpoint
> +description: sub-node describing the input from LCDIF
> +
> +  endpoint@1:
> +$ref: /schemas/graph.yaml#/properties/endpoint
> +description: sub-node describing the input from DCSS
> +
> +oneOf:
> +  - required:
> +  - endpoint@0
> +  - required:
> +  - endpoint@1
> +
> +unevaluatedProperties: false
> +
> +  port@1:
> +$ref: /schemas/graph.yaml#/properties/port
> +description:
> +  DSI output port node to the panel or the next bridge
> +  in the chain
> +
> +required:
> +  - port@0
> +  - port@1
> +
> +required:
> +  - '#address-cells'
> +  - '#size-cells'
> +  - clock-names
> +  - clocks
> +  - compatible
> +  - interrupts
> +  - phy-names
> +  - phys
> +  - ports
> +  - reg
> +  - samsung,burst-clock-frequency
> +  - samsung,esc-clock-frequency
> +  - samsung,pll-clock-frequency
> +
> +unevaluatedProperties: false
> +
> +examples:
> +  - |
> +#include 
> +#include 
> +#include 
> +#include 
> +
> +dsi: dsi@32e1 {
> +  compatible = "fsl,imx8mm-sec-dsim";
> +  reg = <0x32e1 0xa0>;
> +  clocks = < IMX8MM_CLK_DSI_CORE>,
> +   < IMX8MM_CLK_DSI_PHY_REF>;
> +  clock-names = "bus", "phy_ref";
> +  assigned-clocks = < IMX8MM_CLK_DSI_CORE>,
> +< IMX8MM_VIDEO_PLL1_OUT>,
> +< IMX8MM_CLK_DSI_PHY_REF>;
> +  assigned-clock-parents = < IMX8MM_SYS_PLL1_266M>,
> +   

Re: [PATCH v3 1/2] habanalabs: define uAPI to export FD for DMA-BUF

2021-06-21 Thread Jason Gunthorpe
On Mon, Jun 21, 2021 at 07:26:14PM +0300, Oded Gabbay wrote:
> On Mon, Jun 21, 2021 at 5:12 PM Jason Gunthorpe  wrote:
> >
> > On Mon, Jun 21, 2021 at 03:02:10PM +0200, Greg KH wrote:
> > > On Mon, Jun 21, 2021 at 02:28:48PM +0200, Daniel Vetter wrote:
> >
> > > > Also I'm wondering which is the other driver that we share buffers
> > > > with. The gaudi stuff doesn't have real struct pages as backing
> > > > storage, it only fills out the dma_addr_t. That tends to blow up with
> > > > other drivers, and the only place where this is guaranteed to work is
> > > > if you have a dynamic importer which sets the allow_peer2peer flag.
> > > > Adding maintainers from other subsystems who might want to chime in
> > > > here. So even aside of the big question as-is this is broken.
> > >
> > > From what I can tell this driver is sending the buffers to other
> > > instances of the same hardware,
> >
> > A dmabuf is consumed by something else in the kernel calling
> > dma_buf_map_attachment() on the FD.
> >
> > What is the other side of this? I don't see any
> > dma_buf_map_attachment() calls in drivers/misc, or added in this patch
> > set.
> 
> This patch-set is only to enable the support for the exporter side.
> The "other side" is any generic RDMA networking device that will want
> to perform p2p communication over PCIe with our GAUDI accelerator.
> An example is indeed the mlnx5 card which has already integrated
> support for being an "importer".

It raises the question of how you are testing this if you aren't using
it with the only intree driver: mlx5.

Jason


Re: [RFC PATCH 1/9] dt-bindings: display: bridge: Add Samsung SEC MIPI DSIM bindings

2021-06-21 Thread Rob Herring
On Mon, 21 Jun 2021 12:54:16 +0530, Jagan Teki wrote:
> Samsung SEC MIPI DSIM Bridge controller is MIPI DSI bridge
> available in NXP's i.MX8M Mini and Nano Processors.
> 
> Add dt-bindings for it.
> 
> Cc: Andrzej Hajda 
> Cc: Neil Armstrong 
> Cc: Robert Foss 
> Cc: Laurent Pinchart 
> Cc: Rob Herring 
> Signed-off-by: Jagan Teki 
> ---
>  .../display/bridge/samsung,sec-dsim.yaml  | 184 ++
>  1 file changed, 184 insertions(+)
>  create mode 100644 
> Documentation/devicetree/bindings/display/bridge/samsung,sec-dsim.yaml
> 

My bot found errors running 'make DT_CHECKER_FLAGS=-m dt_binding_check'
on your patch (DT_CHECKER_FLAGS is new in v5.13):

yamllint warnings/errors:

dtschema/dtc warnings/errors:
Documentation/devicetree/bindings/display/bridge/samsung,sec-dsim.example.dts:20:18:
 fatal error: dt-bindings/power/imx8mm-power.h: No such file or directory
   20 | #include 
  |  ^~
compilation terminated.
make[1]: *** [scripts/Makefile.lib:380: 
Documentation/devicetree/bindings/display/bridge/samsung,sec-dsim.example.dt.yaml]
 Error 1
make[1]: *** Waiting for unfinished jobs
make: *** [Makefile:1416: dt_binding_check] Error 2
doc reference errors (make refcheckdocs):

See https://patchwork.ozlabs.org/patch/1494924

This check can fail if there are any dependencies. The base for a patch
series is generally the most recent rc1.

If you already ran 'make dt_binding_check' and didn't see the above
error(s), then make sure 'yamllint' is installed and dt-schema is up to
date:

pip3 install dtschema --upgrade

Please check and re-submit.



Re: [RFC PATCH 3/9] dt-bindings: phy: Add SEC DSIM DPHY bindings

2021-06-21 Thread Rob Herring
On Mon, 21 Jun 2021 12:54:18 +0530, Jagan Teki wrote:
> Samsung SEC MIPI DSIM DPHY controller is part of registers
> available in SEC MIPI DSIM bridge for NXP's i.MX8M Mini and
> Nano Processors.
> 
> Add dt-bindings for it.
> 
> Cc: Kishon Vijay Abraham I 
> Cc: Vinod Koul 
> Cc: Rob Herring 
> Signed-off-by: Jagan Teki 
> ---
>  .../bindings/phy/samsung,sec-dsim-dphy.yaml   | 56 +++
>  1 file changed, 56 insertions(+)
>  create mode 100644 
> Documentation/devicetree/bindings/phy/samsung,sec-dsim-dphy.yaml
> 

My bot found errors running 'make DT_CHECKER_FLAGS=-m dt_binding_check'
on your patch (DT_CHECKER_FLAGS is new in v5.13):

yamllint warnings/errors:

dtschema/dtc warnings/errors:
Documentation/devicetree/bindings/phy/samsung,sec-dsim-dphy.example.dts:20:18: 
fatal error: dt-bindings/power/imx8mm-power.h: No such file or directory
   20 | #include 
  |  ^~
compilation terminated.
make[1]: *** [scripts/Makefile.lib:380: 
Documentation/devicetree/bindings/phy/samsung,sec-dsim-dphy.example.dt.yaml] 
Error 1
make[1]: *** Waiting for unfinished jobs
make: *** [Makefile:1416: dt_binding_check] Error 2
doc reference errors (make refcheckdocs):

See https://patchwork.ozlabs.org/patch/1494925

This check can fail if there are any dependencies. The base for a patch
series is generally the most recent rc1.

If you already ran 'make dt_binding_check' and didn't see the above
error(s), then make sure 'yamllint' is installed and dt-schema is up to
date:

pip3 install dtschema --upgrade

Please check and re-submit.



Re: [PATH 3/4] dt-bindings: display: Add virtual DRM

2021-06-21 Thread Rob Herring
On Mon, 21 Jun 2021 15:44:02 +0900, Tomohito Esaki wrote:
> Add device tree bindings documentation for virtual DRM.
> 
> Signed-off-by: Tomohito Esaki 
> ---
>  .../devicetree/bindings/display/vdrm.yaml | 67 +++
>  1 file changed, 67 insertions(+)
>  create mode 100644 Documentation/devicetree/bindings/display/vdrm.yaml
> 

My bot found errors running 'make DT_CHECKER_FLAGS=-m dt_binding_check'
on your patch (DT_CHECKER_FLAGS is new in v5.13):

yamllint warnings/errors:
./Documentation/devicetree/bindings/display/vdrm.yaml:39:1: [error] syntax 
error: found character '\t' that cannot start any token (syntax)

dtschema/dtc warnings/errors:
make[1]: *** Deleting file 
'Documentation/devicetree/bindings/display/vdrm.example.dts'
Traceback (most recent call last):
  File "/usr/local/bin/dt-extract-example", line 45, in 
binding = yaml.load(open(args.yamlfile, encoding='utf-8').read())
  File "/usr/local/lib/python3.8/dist-packages/ruamel/yaml/main.py", line 434, 
in load
return constructor.get_single_data()
  File "/usr/local/lib/python3.8/dist-packages/ruamel/yaml/constructor.py", 
line 120, in get_single_data
node = self.composer.get_single_node()
  File "_ruamel_yaml.pyx", line 706, in _ruamel_yaml.CParser.get_single_node
  File "_ruamel_yaml.pyx", line 724, in _ruamel_yaml.CParser._compose_document
  File "_ruamel_yaml.pyx", line 775, in _ruamel_yaml.CParser._compose_node
  File "_ruamel_yaml.pyx", line 889, in 
_ruamel_yaml.CParser._compose_mapping_node
  File "_ruamel_yaml.pyx", line 775, in _ruamel_yaml.CParser._compose_node
  File "_ruamel_yaml.pyx", line 889, in 
_ruamel_yaml.CParser._compose_mapping_node
  File "_ruamel_yaml.pyx", line 775, in _ruamel_yaml.CParser._compose_node
  File "_ruamel_yaml.pyx", line 889, in 
_ruamel_yaml.CParser._compose_mapping_node
  File "_ruamel_yaml.pyx", line 775, in _ruamel_yaml.CParser._compose_node
  File "_ruamel_yaml.pyx", line 889, in 
_ruamel_yaml.CParser._compose_mapping_node
  File "_ruamel_yaml.pyx", line 775, in _ruamel_yaml.CParser._compose_node
  File "_ruamel_yaml.pyx", line 889, in 
_ruamel_yaml.CParser._compose_mapping_node
  File "_ruamel_yaml.pyx", line 731, in _ruamel_yaml.CParser._compose_node
  File "_ruamel_yaml.pyx", line 904, in _ruamel_yaml.CParser._parse_next_event
ruamel.yaml.scanner.ScannerError: while scanning a plain scalar
  in "", line 38, column 15
found a tab character that violates indentation
  in "", line 39, column 1
make[1]: *** [Documentation/devicetree/bindings/Makefile:20: 
Documentation/devicetree/bindings/display/vdrm.example.dts] Error 1
make[1]: *** Waiting for unfinished jobs
./Documentation/devicetree/bindings/display/vdrm.yaml:  while scanning a plain 
scalar
  in "", line 38, column 15
found a tab character that violates indentation
  in "", line 39, column 1
/builds/robherring/linux-dt-review/Documentation/devicetree/bindings/display/vdrm.yaml:
 ignoring, error parsing file
warning: no schema found in file: 
./Documentation/devicetree/bindings/display/vdrm.yaml
make: *** [Makefile:1416: dt_binding_check] Error 2
doc reference errors (make refcheckdocs):

See https://patchwork.ozlabs.org/patch/1494913

This check can fail if there are any dependencies. The base for a patch
series is generally the most recent rc1.

If you already ran 'make dt_binding_check' and didn't see the above
error(s), then make sure 'yamllint' is installed and dt-schema is up to
date:

pip3 install dtschema --upgrade

Please check and re-submit.



Re: [PATCH] drm/vc4: dsi: Only register our component once a DSI device is attached

2021-06-21 Thread Jagan Teki
Hi Laurent,

On Mon, Jun 21, 2021 at 7:44 PM Laurent Pinchart
 wrote:
>
> Hi Jagan,
>
> On Mon, Jun 21, 2021 at 07:41:07PM +0530, Jagan Teki wrote:
> > On Mon, Jun 21, 2021 at 6:26 PM Laurent Pinchart wrote:
> > > On Mon, Jun 21, 2021 at 12:49:14PM +0100, Dave Stevenson wrote:
> > > > On Sun, 20 Jun 2021 at 23:49, Laurent Pinchart wrote:
> > > > > On Sun, Jun 20, 2021 at 09:42:27PM +0300, Laurent Pinchart wrote:
> > > > > > On Sun, Jun 20, 2021 at 03:29:03PM +0100, Dave Stevenson wrote:
> > > > > > > On Sun, 20 Jun 2021 at 04:26, Laurent Pinchart wrote:
> > > > > > > >
> > > > > > > > Hi Maxime,
> > > > > > > >
> > > > > > > > I'm testing this, and I'm afraid it causes an issue with all the
> > > > > > > > I2C-controlled bridges. I'm focussing on the newly merged 
> > > > > > > > ti-sn65dsi83
> > > > > > > > driver at the moment, but other are affected the same way.
> > > > > > > >
> > > > > > > > With this patch, the DSI component is only added when the DSI 
> > > > > > > > device is
> > > > > > > > attached to the host with mipi_dsi_attach(). In the 
> > > > > > > > ti-sn65dsi83 driver,
> > > > > > > > this happens in the bridge attach callback, which is called 
> > > > > > > > when the
> > > > > > > > bridge is attached by a call to drm_bridge_attach() in 
> > > > > > > > vc4_dsi_bind().
> > > > > > > > This creates a circular dependency, and the DRM/KMS device is 
> > > > > > > > never
> > > > > > > > created.
> > > > > > > >
> > > > > > > > How should this be solved ? Dave, I think you have shown an 
> > > > > > > > interest in
> > > > > > > > the sn65dsi83 recently, any help would be appreciated. On a 
> > > > > > > > side note,
> > > > > > > > I've tested the ti-sn65dsi83 driver on a v5.10 RPi kernel, 
> > > > > > > > without much
> > > > > > > > success (on top of commit e1499baa0b0c I get a very weird frame 
> > > > > > > > rate -
> > > > > > > > 147 fps of 99 fps instead of 60 fps - and nothing on the 
> > > > > > > > screen, and on
> > > > > > > > top of the latest v5.10 RPi branch, I get lock-related warnings 
> > > > > > > > at every
> > > > > > > > page flip), which is why I tried v5.12 and noticed this patch. 
> > > > > > > > Is it
> > > > > > > > worth trying to bring up the display on the v5.10 RPi kernel in 
> > > > > > > > parallel
> > > > > > > > to fixing the issue introduced in this patch, or is DSI known 
> > > > > > > > to be
> > > > > > > > broken there ?
> > > > > > >
> > > > > > > I've been looking at SN65DSI83/4, but as I don't have any hardware
> > > > > > > I've largely been suggesting things to try to those on the forums 
> > > > > > > who
> > > > > > > do [1].
> > > > > > >
> > > > > > > My branch at 
> > > > > > > https://github.com/6by9/linux/tree/rpi-5.10.y-sn65dsi8x-marek
> > > > > > > is the latest one I've worked on. It's rpi-5.10.y with Marek's 
> > > > > > > driver
> > > > > > > cherry-picked, and an overlay and simple-panel definition by 
> > > > > > > others.
> > > > > > > It also has a rework for vc4_dsi to use pm_runtime, instead of
> > > > > > > breaking up the DSI bridge chain (which is flawed as it never 
> > > > > > > calls
> > > > > > > the bridge mode_set or mode_valid functions which sn65dsi83 relies
> > > > > > > on).
> > > > > > >
> > > > > > > I ran it on Friday in the lab and encountered an issue with 
> > > > > > > vc4_dsi
> > > > > > > should vc4_dsi_encoder_mode_fixup wish for a divider of 7 
> > > > > > > (required
> > > > > > > for this 800x1280 panel over 4 lanes) where it resulted in an 
> > > > > > > invalid
> > > > > > > mode configuration. That resulted in patch [2] which then gave me
> > > > > > > sensible numbers.
> > > > > > >
> > > > > > > That branch with dtoverlay=vc4-kms-v3d and
> > > > > > > dtoverlay=vc4-kms-dsi-ti-sn65dsi83 created all the expected 
> > > > > > > devices,
> > > > > > > and everything came up normally.
> > > > > > > It was a busy day, but I think I even stuck a scope on the clock 
> > > > > > > lanes
> > > > > > > at that point and confirmed that they were at the link frequency
> > > > > > > expected.
> > > > > >
> > > > > > Thanks, I'll test your branch and will report the results.
> > > > >
> > > > > I had to apply the following diff to work around a crash:
> > > > >
> > > > > diff --git a/drivers/gpu/drm/bridge/ti-sn65dsi83.c 
> > > > > b/drivers/gpu/drm/bridge/ti-sn65dsi83.c
> > > > > index 55b6c53207f5..647426aa793a 100644
> > > > > --- a/drivers/gpu/drm/bridge/ti-sn65dsi83.c
> > > > > +++ b/drivers/gpu/drm/bridge/ti-sn65dsi83.c
> > > > > @@ -525,6 +525,9 @@ static bool sn65dsi83_mode_fixup(struct 
> > > > > drm_bridge *bridge,
> > > > >
> > > > > /* The DSI format is always RGB888_1X24 */
> > > > > list_for_each_entry(connector, 
> > > > > >mode_config.connector_list, head) {
> > > > > +   if (!connector->display_info.bus_formats)
> > > > > +   continue;
> > > > > +
> > > > > switch (connector->display_info.bus_formats[0]) {
> > > > >  

Re: [PATCH] drm/bridge: ti-sn65dsi83: Replace connector format patching with atomic_get_input_bus_fmts

2021-06-21 Thread Robert Foss
>
> Perfect :-)
>
> Reviewed-by: Laurent Pinchart 
>

Pulled into drm-misc-next.

https://cgit.freedesktop.org/drm/drm-misc/commit/?id=db8b7ca5b232083c82f627af7fe653d8074c5ca0


Re: [PATCH] drm: mxsfb: Increase number of outstanding requests on V4 and newer HW

2021-06-21 Thread Marek Vasut

On 6/21/21 2:08 PM, Lucas Stach wrote:

Am Montag, dem 21.06.2021 um 00:47 +0200 schrieb Marek Vasut:

In case the DRAM is under high load, the MXSFB FIFO might underflow
and that causes visible artifacts. This could be triggered on i.MX8MM
using e.g. "$ memtester 128M" on a device with 1920x1080 panel. The
first "Stuck Address" test of the memtester will completely corrupt
the image on the panel and leave the MXSFB FIFO in odd state.

To avoid this underflow, increase number of outstanding requests to
DRAM from 2 to 16, which is the maximum. This mitigates the issue
and it can no longer be triggered.


I see the obvious benefit of this change, but I'm not sure if enabling
this on older SoCs is without any drawbacks. The newer SoCs have a good
transaction scheduling on the NOC (i.MX8M) or at least a somewhat okay
one in the DRAM controller (i.MX6). I'm not so sure about the older
SoCs, where I could imagine too many outstanding transactions to delay
memory traffic for other masters on the SoC.

You don't happen to have any experience with this on the older SoCs, do
you?


The only older SoC which would be affected by this, except for the ones 
you already listed, is MX28. And the MX28 has a rather decent DRAM 
controller, so I wouldn't expect problems there either.


You can look at it the other way around too, however: if the DRAM gets 
saturated, the LCDIF controller suffers from FIFO underflows, and that 
completely messes up the FIFO state, at which point the image on the 
display is distorted, shifted, wrapped around, or any other such odd 
effect. The underflow auto-recovery bit helps with it, but with this 
patch in place you are unlikely to run into those effects at all.


Re: [PATCH] drm: mxsfb: Clear FIFO_CLEAR bit

2021-06-21 Thread Marek Vasut

On 6/21/21 2:14 PM, Lucas Stach wrote:

[...]


diff --git a/drivers/gpu/drm/mxsfb/mxsfb_kms.c 
b/drivers/gpu/drm/mxsfb/mxsfb_kms.c
index 98d8ba0bae84..22cb749fc9bc 100644
--- a/drivers/gpu/drm/mxsfb/mxsfb_kms.c
+++ b/drivers/gpu/drm/mxsfb/mxsfb_kms.c
@@ -241,6 +241,9 @@ static void mxsfb_crtc_mode_set_nofb(struct 
mxsfb_drm_private *mxsfb,
  
  	/* Clear the FIFOs */

writel(CTRL1_FIFO_CLEAR, mxsfb->base + LCDC_CTRL1 + REG_SET);
+   readl(mxsfb->base + LCDC_CTRL1);


Do you really need those readbacks? As both writes are targeting the
same slave interface, the memory barrier in the clear write should push
the set write.


What would push the clear write then ? We can drop one of the readl()s, 
but not the last one.
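
For context, the resulting sequence would then look roughly like this; the
REG_CLR offset is assumed from the usual mxsfb register layout and is not
quoted from the patch:

	/* Clear the FIFOs: set then clear CTRL1_FIFO_CLEAR */
	writel(CTRL1_FIFO_CLEAR, mxsfb->base + LCDC_CTRL1 + REG_SET);
	writel(CTRL1_FIFO_CLEAR, mxsfb->base + LCDC_CTRL1 + REG_CLR);
	/* nothing later targets this register, so read back to push out the clear */
	readl(mxsfb->base + LCDC_CTRL1);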


Re: [PATCH] drm: mxsfb: Disable overlay plane support for i.MX8MM/i.MX8MN

2021-06-21 Thread Marek Vasut

On 6/21/21 2:10 PM, Lucas Stach wrote:

Am Montag, dem 21.06.2021 um 00:48 +0200 schrieb Marek Vasut:

The iMX8MM and iMX8MN do not support the overlay plane, so they
are MXSFB V4. Add the compatible strings to reflect this. Note
that iMX8MQ does support the overlay plane, so it is MXSFB V6.


Do we need this compatible in the driver? If there are no other changes
known at this time, shouldn't we be able to just specify "fsl,imx28-
lcdif" as the fallback compatible in the DT, besides the imx8m*
specific ones?


Yes, we should discard this patch and use the fallback compatible, 
although it is quite counter-intuitive.
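
I.e. something like the following in the DT, with the SoC-specific string
first and the generic one as fallback (the exact i.MX8MM string below is
only an example, not taken from this discussion):

	compatible = "fsl,imx8mm-lcdif", "fsl,imx28-lcdif";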


Re: [PATCH v3 1/2] habanalabs: define uAPI to export FD for DMA-BUF

2021-06-21 Thread Oded Gabbay
On Mon, Jun 21, 2021 at 5:12 PM Jason Gunthorpe  wrote:
>
> On Mon, Jun 21, 2021 at 03:02:10PM +0200, Greg KH wrote:
> > On Mon, Jun 21, 2021 at 02:28:48PM +0200, Daniel Vetter wrote:
>
> > > Also I'm wondering which is the other driver that we share buffers
> > > with. The gaudi stuff doesn't have real struct pages as backing
> > > storage, it only fills out the dma_addr_t. That tends to blow up with
> > > other drivers, and the only place where this is guaranteed to work is
> > > if you have a dynamic importer which sets the allow_peer2peer flag.
> > > Adding maintainers from other subsystems who might want to chime in
> > > here. So even aside of the big question as-is this is broken.
> >
> > From what I can tell this driver is sending the buffers to other
> > instances of the same hardware,
>
> A dmabuf is consumed by something else in the kernel calling
> dma_buf_map_attachment() on the FD.
>
> What is the other side of this? I don't see any
> dma_buf_map_attachment() calls in drivers/misc, or added in this patch
> set.

This patch-set is only to enable the support for the exporter side.
The "other side" is any generic RDMA networking device that will want
to perform p2p communication over PCIe with our GAUDI accelerator.
An example is indeed the mlnx5 card which has already integrated
support for being an "importer".

This is *not* used for communication with another GAUDI device. If I
want to communicate with another GAUDI device, our userspace
communications library will use our internal network links, without
any need for dma-buf.

Oded

>
> AFAIK the only viable in-tree other side is in mlx5 (look in
> umem_dmabuf.c)
>
> Though as we already talked habana has their own networking (out of
> tree, presumably) so I suspect this is really to support some out of
> tree stuff??
>
> Jason


Re: [PATCH] drm/vc4: dsi: Only register our component once a DSI device is attached

2021-06-21 Thread Dave Stevenson
Hi Maxime

On Mon, 21 Jun 2021 at 17:05, Maxime Ripard  wrote:
>
> Hi Laurent,
>
> On Sun, Jun 20, 2021 at 04:44:33AM +0300, Laurent Pinchart wrote:
> > Hi Maxime,
> >
> > I'm testing this, and I'm afraid it causes an issue with all the
> > I2C-controlled bridges. I'm focussing on the newly merged ti-sn65dsi83
> > driver at the moment, but other are affected the same way.
> >
> > With this patch, the DSI component is only added when the DSI device is
> > attached to the host with mipi_dsi_attach(). In the ti-sn65dsi83 driver,
> > this happens in the bridge attach callback, which is called when the
> > bridge is attached by a call to drm_bridge_attach() in vc4_dsi_bind().
> > This creates a circular dependency, and the DRM/KMS device is never
> > created.
>
> We discussed it on IRC, but it makes more sense here.
>
> The thing is, that patch is fixing a circular dependency we discussed
> with Andrzej a year ago:
>
> https://lore.kernel.org/dri-devel/20200630132711.ezywhvoiuv3sw...@gilmour.lan/
>
> It seems like we have to choose between having the panels or bridges
> working :/

The Pi panel using the panel-raspberrypi-touchscreen driver is flawed
as it controls the power to the FT5406 touchscreen element as well as
the display. If DRM powers down the display, power to the
touchscreen is cut too, but the edt-ft5x06 touchscreen driver has no notion
of this :-(

The two parts have been broken into bridge/tc358762 and
regulator/rpi-panel-attiny-regulator which then allows the edt-ft5x06
driver to keep control over power. I haven't had it be 100% reliable
though, so I'm still investigating as time allows, but this seems like
the better solution than panel-raspberrypi-touchscreen.

With the tc358762 node back under the DSI host node, I think that
circular dependency you were trying to solve goes away.
However with sn65dsi83 being I2C configured, is that an issue again?

  Dave


Re: [PATCH] dma-buf: Document non-dynamic exporter expectations better

2021-06-21 Thread Daniel Vetter
On Mon, Jun 21, 2021 at 05:53:46PM +0200, Christian König wrote:
> Am 21.06.21 um 17:17 schrieb Daniel Vetter:
> > Christian and me realized we have a pretty massive disconnect about
> > different interpretations of what dma_resv is used for by different
> > drivers. The discussion is much, much bigger than this change here,
> > but this is an important one:
> > 
> > Non-dynamic exporters must guarantee that the memory they return is
> > ready for use. They cannot expect importers to wait for the exclusive
> > fence. Only dynamic importers are required to obey the dma_resv fences
> > strictly (and more patches are needed to define exactly what this
> > means).
> > 
> > Christian has patches to update nouveau, radeon and amdgpu. The only
> > other driver using both ttm and supporting dma-buf export is qxl,
> > which only uses synchronous ttm_bo_move.
> > 
> > v2: To hammer this in document that dynamic importers _must_ wait for
> > the exclusive fence after having called dma_buf_map_attachment.
> > 
> > Cc: Christian König 
> > Signed-off-by: Daniel Vetter 
> 
> Reviewed-by: Christian König 

Applied to drm-misc-next, thanks for taking a look. Maybe when you merge
the actual bugfixes, link to this patch as an explanation in each commit
message:

References: 
https://lore.kernel.org/dri-devel/20210621151758.2347474-1-daniel.vet...@ffwll.ch/

That helps a bit with your rather terse commit messages.
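
Purely to illustrate the documented rule, and not part of the patch, a
dynamic importer would do roughly the following after mapping; the
dma_resv_wait_timeout_rcu() name and arguments are quoted from memory, so
double-check them against dma-resv.h:

	sgt = dma_buf_map_attachment(attach, DMA_BIDIRECTIONAL);
	if (IS_ERR(sgt))
		return PTR_ERR(sgt);

	/* wait for the exclusive fence before touching the memory */
	ret = dma_resv_wait_timeout_rcu(attach->dmabuf->resv, false, true,
					MAX_SCHEDULE_TIMEOUT);
	if (ret < 0)
		goto err_unmap;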
-Daniel

> 
> > ---
> >   drivers/dma-buf/dma-buf.c |  3 +++
> >   include/linux/dma-buf.h   | 15 +++
> >   2 files changed, 18 insertions(+)
> > 
> > diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
> > index e3ba5db5f292..65cbd7f0f16a 100644
> > --- a/drivers/dma-buf/dma-buf.c
> > +++ b/drivers/dma-buf/dma-buf.c
> > @@ -956,6 +956,9 @@ EXPORT_SYMBOL_GPL(dma_buf_unpin);
> >* the underlying backing storage is pinned for as long as a mapping 
> > exists,
> >* therefore users/importers should not hold onto a mapping for undue 
> > amounts of
> >* time.
> > + *
> > + * Important: Dynamic importers must wait for the exclusive fence of the 
> > struct
> > + * dma_resv attached to the DMA-BUF first.
> >*/
> >   struct sg_table *dma_buf_map_attachment(struct dma_buf_attachment *attach,
> > enum dma_data_direction direction)
> > diff --git a/include/linux/dma-buf.h b/include/linux/dma-buf.h
> > index 342585bd6dff..92eec38a03aa 100644
> > --- a/include/linux/dma-buf.h
> > +++ b/include/linux/dma-buf.h
> > @@ -96,6 +96,12 @@ struct dma_buf_ops {
> >  * This is called automatically for non-dynamic importers from
> >  * dma_buf_attach().
> >  *
> > +* Note that similar to non-dynamic exporters in their @map_dma_buf
> > +* callback the driver must guarantee that the memory is available for
> > +* use and cleared of any old data by the time this function returns.
> > +* Drivers which pipeline their buffer moves internally must wait for
> > +* all moves and clears to complete.
> > +*
> >  * Returns:
> >  *
> >  * 0 on success, negative error code on failure.
> > @@ -144,6 +150,15 @@ struct dma_buf_ops {
> >  * This is always called with the dmabuf->resv object locked when
> >  * the dynamic_mapping flag is true.
> >  *
> > +* Note that for non-dynamic exporters the driver must guarantee that
> > +* that the memory is available for use and cleared of any old data by
> > +* the time this function returns.  Drivers which pipeline their buffer
> > +* moves internally must wait for all moves and clears to complete.
> > +* Dynamic exporters do not need to follow this rule: For non-dynamic
> > +* importers the buffer is already pinned through @pin, which has the
> > +* same requirements. Dynamic importers otoh are required to obey the
> > +* dma_resv fences.
> > +*
> >  * Returns:
> >  *
> >  * A _table scatter list of or the backing storage of the DMA buffer,
> 

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch


Re: [PATCH v2 1/2] drm/panfrost: Use a threaded IRQ for job interrupts

2021-06-21 Thread Steven Price
On 21/06/2021 15:02, Boris Brezillon wrote:
> This should avoid unnecessary interrupt-context switches when the GPU
> is passed a lot of short jobs.

LGTM:

Reviewed-by: Steven Price 

> 
> v2:
> * New patch
> 
> Signed-off-by: Boris Brezillon 
> ---
>  drivers/gpu/drm/panfrost/panfrost_job.c | 54 +
>  1 file changed, 38 insertions(+), 16 deletions(-)
> 
> diff --git a/drivers/gpu/drm/panfrost/panfrost_job.c 
> b/drivers/gpu/drm/panfrost/panfrost_job.c
> index cf6abe0fdf47..1b5c636794a1 100644
> --- a/drivers/gpu/drm/panfrost/panfrost_job.c
> +++ b/drivers/gpu/drm/panfrost/panfrost_job.c
> @@ -473,19 +473,12 @@ static const struct drm_sched_backend_ops 
> panfrost_sched_ops = {
>   .free_job = panfrost_job_free
>  };
>  
> -static irqreturn_t panfrost_job_irq_handler(int irq, void *data)
> +static void panfrost_job_handle_irq(struct panfrost_device *pfdev, u32 
> status)
>  {
> - struct panfrost_device *pfdev = data;
> - u32 status = job_read(pfdev, JOB_INT_STAT);
>   int j;
>  
>   dev_dbg(pfdev->dev, "jobslot irq status=%x\n", status);
>  
> - if (!status)
> - return IRQ_NONE;
> -
> - pm_runtime_mark_last_busy(pfdev->dev);
> -
>   for (j = 0; status; j++) {
>   u32 mask = MK_JS_MASK(j);
>  
> @@ -558,16 +551,43 @@ static irqreturn_t panfrost_job_irq_handler(int irq, 
> void *data)
>  
>   status &= ~mask;
>   }
> +}
>  
> +static irqreturn_t panfrost_job_irq_handler_thread(int irq, void *data)
> +{
> + struct panfrost_device *pfdev = data;
> + u32 status = job_read(pfdev, JOB_INT_RAWSTAT);
> +
> + while (status) {
> + pm_runtime_mark_last_busy(pfdev->dev);
> +
> + spin_lock(&pfdev->js->job_lock);
> + panfrost_job_handle_irq(pfdev, status);
> + spin_unlock(&pfdev->js->job_lock);
> + status = job_read(pfdev, JOB_INT_RAWSTAT);
> + }
> +
> + job_write(pfdev, JOB_INT_MASK, ~0);
>   return IRQ_HANDLED;
>  }
>  
> +static irqreturn_t panfrost_job_irq_handler(int irq, void *data)
> +{
> + struct panfrost_device *pfdev = data;
> + u32 status = job_read(pfdev, JOB_INT_STAT);
> +
> + if (!status)
> + return IRQ_NONE;
> +
> + job_write(pfdev, JOB_INT_MASK, 0);
> + return IRQ_WAKE_THREAD;
> +}
> +
>  static void panfrost_reset(struct work_struct *work)
>  {
>   struct panfrost_device *pfdev = container_of(work,
>struct panfrost_device,
>reset.work);
> - unsigned long flags;
>   unsigned int i;
>  
>   for (i = 0; i < NUM_JOB_SLOTS; i++) {
> @@ -595,7 +615,7 @@ static void panfrost_reset(struct work_struct *work)
>   /* All timers have been stopped, we can safely reset the pending state. 
> */
>   atomic_set(>reset.pending, 0);
>  
> - spin_lock_irqsave(&pfdev->js->job_lock, flags);
> + spin_lock(&pfdev->js->job_lock);
>   for (i = 0; i < NUM_JOB_SLOTS; i++) {
>   if (pfdev->jobs[i]) {
>   pm_runtime_put_noidle(pfdev->dev);
> @@ -603,7 +623,7 @@ static void panfrost_reset(struct work_struct *work)
>   pfdev->jobs[i] = NULL;
>   }
>   }
> - spin_unlock_irqrestore(&pfdev->js->job_lock, flags);
> + spin_unlock(&pfdev->js->job_lock);
>  
>   panfrost_device_reset(pfdev);
>  
> @@ -628,8 +648,11 @@ int panfrost_job_init(struct panfrost_device *pfdev)
>   if (irq <= 0)
>   return -ENODEV;
>  
> - ret = devm_request_irq(pfdev->dev, irq, panfrost_job_irq_handler,
> -IRQF_SHARED, KBUILD_MODNAME "-job", pfdev);
> + ret = devm_request_threaded_irq(pfdev->dev, irq,
> + panfrost_job_irq_handler,
> + panfrost_job_irq_handler_thread,
> + IRQF_SHARED, KBUILD_MODNAME "-job",
> + pfdev);
>   if (ret) {
>   dev_err(pfdev->dev, "failed to request job irq");
>   return ret;
> @@ -696,14 +719,13 @@ int panfrost_job_open(struct panfrost_file_priv 
> *panfrost_priv)
>  void panfrost_job_close(struct panfrost_file_priv *panfrost_priv)
>  {
>   struct panfrost_device *pfdev = panfrost_priv->pfdev;
> - unsigned long flags;
>   int i;
>  
>   for (i = 0; i < NUM_JOB_SLOTS; i++)
>   drm_sched_entity_destroy(&panfrost_priv->sched_entity[i]);
>  
>   /* Kill in-flight jobs */
> - spin_lock_irqsave(&pfdev->js->job_lock, flags);
> + spin_lock(&pfdev->js->job_lock);
>   for (i = 0; i < NUM_JOB_SLOTS; i++) {
>   struct drm_sched_entity *entity = 
> &panfrost_priv->sched_entity[i];
>   struct panfrost_job *job = pfdev->jobs[i];
> @@ -713,7 +735,7 @@ void panfrost_job_close(struct panfrost_file_priv 
> *panfrost_priv)
>  
>   job_write(pfdev, JS_COMMAND(i), JS_COMMAND_HARD_STOP);
>   }
> - 

Re: [PATCH v2 2/2] drm/panfrost: Queue jobs on the hardware

2021-06-21 Thread Steven Price
On 21/06/2021 15:02, Boris Brezillon wrote:
> From: Steven Price 
> 
> The hardware has a set of '_NEXT' registers that can hold a second job
> while the first is executing. Make use of these registers to enqueue a
> second job per slot.
> 
> v2:
> * Make sure non-faulty jobs get properly paused/resumed on GPU reset
> 
> Signed-off-by: Steven Price 
> Signed-off-by: Boris Brezillon 
> ---
>  drivers/gpu/drm/panfrost/panfrost_device.h |   2 +-
>  drivers/gpu/drm/panfrost/panfrost_job.c| 311 -
>  2 files changed, 242 insertions(+), 71 deletions(-)
> 
> diff --git a/drivers/gpu/drm/panfrost/panfrost_device.h 
> b/drivers/gpu/drm/panfrost/panfrost_device.h
> index 95e6044008d2..a87917b9e714 100644
> --- a/drivers/gpu/drm/panfrost/panfrost_device.h
> +++ b/drivers/gpu/drm/panfrost/panfrost_device.h
> @@ -101,7 +101,7 @@ struct panfrost_device {
>  
>   struct panfrost_job_slot *js;
>  
> - struct panfrost_job *jobs[NUM_JOB_SLOTS];
> + struct panfrost_job *jobs[NUM_JOB_SLOTS][2];
>   struct list_head scheduled_jobs;
>  
>   struct panfrost_perfcnt *perfcnt;
> diff --git a/drivers/gpu/drm/panfrost/panfrost_job.c 
> b/drivers/gpu/drm/panfrost/panfrost_job.c
> index 1b5c636794a1..888eceed227f 100644
> --- a/drivers/gpu/drm/panfrost/panfrost_job.c
> +++ b/drivers/gpu/drm/panfrost/panfrost_job.c
> @@ -4,6 +4,7 @@
>  #include 
>  #include 
>  #include 
> +#include 
>  #include 
>  #include 
>  #include 
> @@ -41,6 +42,7 @@ struct panfrost_queue_state {
>  };
>  
>  struct panfrost_job_slot {
> + int irq;
>   struct panfrost_queue_state queue[NUM_JOB_SLOTS];
>   spinlock_t job_lock;
>  };
> @@ -148,9 +150,43 @@ static void panfrost_job_write_affinity(struct 
> panfrost_device *pfdev,
>   job_write(pfdev, JS_AFFINITY_NEXT_HI(js), affinity >> 32);
>  }
>  
> +static struct panfrost_job *
> +panfrost_dequeue_job(struct panfrost_device *pfdev, int slot)
> +{
> + struct panfrost_job *job = pfdev->jobs[slot][0];
> +
> + pfdev->jobs[slot][0] = pfdev->jobs[slot][1];
> + pfdev->jobs[slot][1] = NULL;
> +
> + return job;
> +}
> +
> +static unsigned int
> +panfrost_enqueue_job(struct panfrost_device *pfdev, int slot,
> +  struct panfrost_job *job)
> +{
> + if (!pfdev->jobs[slot][0]) {
> + pfdev->jobs[slot][0] = job;
> + return 0;
> + }
> +
> + WARN_ON(pfdev->jobs[slot][1]);
> + pfdev->jobs[slot][1] = job;
> + return 1;
> +}
> +
> +static u32
> +panfrost_get_job_chain_flag(const struct panfrost_job *job)
> +{
> + struct panfrost_fence *f = to_panfrost_fence(job->done_fence);
> +
> + return (f->seqno & 1) ? JS_CONFIG_JOB_CHAIN_FLAG : 0;

Is the seqno going to reliably toggle like this? We need to ensure that
when there are two jobs on the hardware they have different "job chain
disambiguation" flags.

Also that feature was only introduced in t76x. So relying on that would
sadly kill off support for t60x, t62x and t72x (albeit I'm not sure how
'supported' these are with Mesa anyway).

It is possible to implement without the disambiguation flag - but it's
a bit fiddly: it requires clearing out the _NEXT register, checking that
you actually cleared it successfully (i.e. the hardware didn't just
start the job before you cleared it) and then doing the action if still
necessary. And of course then recovering from having cleared out _NEXT.
There's a reason for adding the feature! ;)

I'll try to review the rest and give it a spin later - although of
course it looks quite familiar ;)

Steve

> +}
> +
>  static void panfrost_job_hw_submit(struct panfrost_job *job, int js)
>  {
>   struct panfrost_device *pfdev = job->pfdev;
> + unsigned int subslot;
>   u32 cfg;
>   u64 jc_head = job->jc;
>   int ret;
> @@ -176,7 +212,8 @@ static void panfrost_job_hw_submit(struct panfrost_job 
> *job, int js)
>* start */
>   cfg |= JS_CONFIG_THREAD_PRI(8) |
>   JS_CONFIG_START_FLUSH_CLEAN_INVALIDATE |
> - JS_CONFIG_END_FLUSH_CLEAN_INVALIDATE;
> + JS_CONFIG_END_FLUSH_CLEAN_INVALIDATE |
> + panfrost_get_job_chain_flag(job);
>  
>   if (panfrost_has_hw_feature(pfdev, HW_FEATURE_FLUSH_REDUCTION))
>   cfg |= JS_CONFIG_ENABLE_FLUSH_REDUCTION;
> @@ -190,10 +227,17 @@ static void panfrost_job_hw_submit(struct panfrost_job 
> *job, int js)
>   job_write(pfdev, JS_FLUSH_ID_NEXT(js), job->flush_id);
>  
>   /* GO ! */
> - dev_dbg(pfdev->dev, "JS: Submitting atom %p to js[%d] with head=0x%llx",
> - job, js, jc_head);
>  
> - job_write(pfdev, JS_COMMAND_NEXT(js), JS_COMMAND_START);
> + spin_lock(&pfdev->js->job_lock);
> + subslot = panfrost_enqueue_job(pfdev, js, job);
> + /* Don't queue the job if a reset is in progress */
> + if (!atomic_read(&pfdev->reset.pending)) {
> + job_write(pfdev, JS_COMMAND_NEXT(js), JS_COMMAND_START);
> + 

Re: [PATCH] drm/vc4: dsi: Only register our component once a DSI device is attached

2021-06-21 Thread Maxime Ripard
Hi,

On Mon, Jun 21, 2021 at 01:48:33AM +0300, Laurent Pinchart wrote:
> Then, when running kmstest --flip, I get one warning per frame:
> 
> [   29.762089] [drm:vc4_dsi_runtime_resume] *ERROR* vc4_dsi_runtime_resume:
> [   29.763200] [drm:vc4_dsi_runtime_resume] *ERROR* vc4_dsi_runtime_resume: 
> All good
> [   29.793861] [ cut here ]
> [   29.798572] WARNING: CPU: 2 PID: 249 at 
> drivers/gpu/drm/drm_modeset_lock.c:246 drm_modeset_lock+0xd0/0x100
> [   29.808365] Modules linked in: ipv6 bcm2835_codec(C) bcm2835_unicam 
> bcm2835_v4l2(C) bcm2835_isp(C) bcm2835_mmal_vchiq(C) v4l2_mem2mem 
> v4l2_dv_timings imx296 rtc_ds1307 videobuf2_vmallom
> [   29.855284] CPU: 2 PID: 249 Comm: kworker/u8:10 Tainted: G C   
>  5.10.44-v8+ #23
> [   29.863756] Hardware name: Raspberry Pi Compute Module 4 Rev 1.0 (DT)
> [   29.870297] Workqueue: events_unbound commit_work
> [   29.875077] pstate: 8005 (Nzcv daif -PAN -UAO -TCO BTYPE=--)
> [   29.881172] pc : drm_modeset_lock+0xd0/0x100
> [   29.885506] lr : drm_atomic_get_new_or_current_crtc_state+0x6c/0x110
> [   29.891950] sp : ffc011fcbcb0
> [   29.895308] x29: ffc011fcbcb0 x28: ff80403fe780
> [   29.900705] x27: ff80415a2000 x26: ffc0106f
> [   29.906100] x25:  x24: ff80420d3c80
> [   29.911495] x23: ff8042174080 x22: 0038
> [   29.916890] x21:  x20: ff80421740a8
> [   29.922284] x19: ffc011f8bc50 x18: 
> [   29.927678] x17:  x16: 
> [   29.933072] x15:  x14: 
> [   29.938466] x13: 00480329 x12: 0326032303290320
> [   29.943860] x11: 0320020301f4 x10: 19e0
> [   29.949255] x9 : ffc0106efd8c x8 : ff804390d5c0
> [   29.954649] x7 : 7fff x6 : 0001
> [   29.960043] x5 : 0001 x4 : 0001
> [   29.965436] x3 : ff80415a2000 x2 : ff804199b200
> [   29.970830] x1 : 00bc x0 : ffc011f8bc98
> [   29.976225] Call trace:
> [   29.978708]  drm_modeset_lock+0xd0/0x100
> [   29.982687]  drm_atomic_get_new_or_current_crtc_state+0x6c/0x110
> [   29.988781]  vc4_atomic_complete_commit+0x4e4/0x860
> [   29.993729]  commit_work+0x18/0x20
> [   29.997181]  process_one_work+0x1c4/0x4a0
> [   30.001248]  worker_thread+0x50/0x420
> [   30.004965]  kthread+0x11c/0x150
> [   30.008239]  ret_from_fork+0x10/0x20
> [   30.011865] ---[ end trace f44ae6b09cda951a ]---
> 
> Does it ring any bell ?
> 
> In case this is useful information, the problem didn't occur on top of
> commit e1499baa0b0c.

I think I have a fix here:
https://github.com/raspberrypi/linux/pull/4402

I haven't tested kmstest --flip yet though

maxime


signature.asc
Description: PGP signature


Re: [PATCH] drm/vc4: dsi: Only register our component once a DSI device is attached

2021-06-21 Thread Maxime Ripard
Hi Laurent,

On Sun, Jun 20, 2021 at 04:44:33AM +0300, Laurent Pinchart wrote:
> Hi Maxime,
> 
> I'm testing this, and I'm afraid it causes an issue with all the
> I2C-controlled bridges. I'm focussing on the newly merged ti-sn65dsi83
> driver at the moment, but others are affected the same way.
> 
> With this patch, the DSI component is only added when the DSI device is
> attached to the host with mipi_dsi_attach(). In the ti-sn65dsi83 driver,
> this happens in the bridge attach callback, which is called when the
> bridge is attached by a call to drm_bridge_attach() in vc4_dsi_bind().
> This creates a circular dependency, and the DRM/KMS device is never
> created.

We discussed it on IRC, but it makes more sense here.

The thing is, that patch is fixing a circular dependency we discussed
with Andrzej a year ago:

https://lore.kernel.org/dri-devel/20200630132711.ezywhvoiuv3sw...@gilmour.lan/

It seems like we have to choose between having the panels or bridges
working :/

Maxime


signature.asc
Description: PGP signature


Re: [PATH 3/4] dt-bindings: display: Add virtual DRM

2021-06-21 Thread Sam Ravnborg
Hi Tomohito

> +
> +description:
> +  This document defines device tree properties virtual DRM. The initial
> +  position, size and z-position of the plane used in the virtual DRM is
> +  specified.


> +  The current limitation is that these settings are applied to all crtc.
This comment (I think) refers to the actual implementation, which is
irrelevant for the binding. The implementation may refer to the binding,
but the binding must be implementation agnostic.

> +
> +properties:
> +  compatible:
> +const: virt-drm
> +
> +patternProperties:
> +  "^plane(@.*)?$":
> +description: Information of the planes used in virtual DRM
> +type: object
> +
> +properties:
> +  x:
> +type: int
This syntax looks wrong, I had expected something like:

$ref: "/schemas/types.yaml#/definitions/uint32"

> +description: x-coordinate of the left-top of the plane in pixels
> +
> +  y:
> +type: int
> +description: y-coordinate of the left-top of the plane in pixels
> +
> +  width:
> +type: int
> +description: width of the plane in pixels
> +
> +  height:
> +type: int
> + description: height of the plane in pixels
> +
> +  zpos:
> +type: int
> +description: z-position of the plane
> +
> +required:
> +  - x
> +  - y
> +  - width
> +  - height
> +  - zpos
> +
> +required:
> +  - compatible

> +  - "^plane(@.*)?$"
If there is no node to match, this binding does not take effect.
So I think ^plane... does not need to be specified.

> +
> +examples:
> + - |
> +   vdrm@0 {
> +   compatible = "virt-drm";
> +   plane@0 {
> +   x = <200>;
> +y = <100>;
> +width = <800>;
> +height = <600>;
> +zpos = <1>;
> +   };
> +   };
Do not mix spaces and tabs; be consistent and use 4 spaces of indent
throughout the example.

Sam


Re: [PATH 2/4] rcar-du: Add support virtual DRM device

2021-06-21 Thread Sam Ravnborg
On Mon, Jun 21, 2021 at 03:44:01PM +0900, Tomohito Esaki wrote:
> In order to use vDRM, it is necessary that the vDRM device is registered
> to the du device in the device tree.
> The "vdrms" key is added in du node and the vDRM device node is specified.
> For example:
> --
> & du {
> ...
> vdrms = <>;
> };
> --
> 
> Signed-off-by: Tomohito Esaki 
> ---
>  drivers/gpu/drm/rcar-du/Kconfig|   4 +
>  drivers/gpu/drm/rcar-du/Makefile   |   1 +
>  drivers/gpu/drm/rcar-du/rcar_du_crtc.c |  42 ++
>  drivers/gpu/drm/rcar-du/rcar_du_crtc.h |  13 ++
>  drivers/gpu/drm/rcar-du/rcar_du_drv.c  |  13 ++
>  drivers/gpu/drm/rcar-du/rcar_du_drv.h  |   3 +
>  drivers/gpu/drm/rcar-du/rcar_du_vdrm.c | 191 +
>  drivers/gpu/drm/rcar-du/rcar_du_vdrm.h |  67 +
>  drivers/gpu/drm/rcar-du/rcar_du_vsp.c  |  22 +++
>  drivers/gpu/drm/rcar-du/rcar_du_vsp.h  |   1 +
>  10 files changed, 357 insertions(+)
>  create mode 100644 drivers/gpu/drm/rcar-du/rcar_du_vdrm.c
>  create mode 100644 drivers/gpu/drm/rcar-du/rcar_du_vdrm.h
> 
> diff --git a/drivers/gpu/drm/rcar-du/Kconfig b/drivers/gpu/drm/rcar-du/Kconfig
> index b47e74421e34..6747f69c8593 100644
> --- a/drivers/gpu/drm/rcar-du/Kconfig
> +++ b/drivers/gpu/drm/rcar-du/Kconfig
> @@ -50,3 +50,7 @@ config DRM_RCAR_WRITEBACK
>   bool
>   default y if ARM64
>   depends on DRM_RCAR_DU
> +
> +config DRM_RCAR_DU_VDRM
> + tristate "Virtual DRM for R-Car DU"
> + depends on DRM_RCAR_DU && DRM_VDRM
> diff --git a/drivers/gpu/drm/rcar-du/Makefile 
> b/drivers/gpu/drm/rcar-du/Makefile
> index 4d1187ccc3e5..b589b974a9f3 100644
> --- a/drivers/gpu/drm/rcar-du/Makefile
> +++ b/drivers/gpu/drm/rcar-du/Makefile
> @@ -14,6 +14,7 @@ rcar-du-drm-$(CONFIG_DRM_RCAR_LVDS) += rcar_du_of.o \
>  rcar_du_of_lvds_r8a7796.dtb.o
>  rcar-du-drm-$(CONFIG_DRM_RCAR_VSP)   += rcar_du_vsp.o
>  rcar-du-drm-$(CONFIG_DRM_RCAR_WRITEBACK) += rcar_du_writeback.o
> +rcar-du-drm-$(CONFIG_DRM_RCAR_DU_VDRM)   += rcar_du_vdrm.o
>  
>  obj-$(CONFIG_DRM_RCAR_CMM)   += rcar_cmm.o
>  obj-$(CONFIG_DRM_RCAR_DU)+= rcar-du-drm.o
> diff --git a/drivers/gpu/drm/rcar-du/rcar_du_crtc.c 
> b/drivers/gpu/drm/rcar-du/rcar_du_crtc.c
> index ea7e39d03545..7d48db24090b 100644
> --- a/drivers/gpu/drm/rcar-du/rcar_du_crtc.c
> +++ b/drivers/gpu/drm/rcar-du/rcar_du_crtc.c
> @@ -32,6 +32,11 @@
>  #include "rcar_du_vsp.h"
>  #include "rcar_lvds.h"
>  
> +#include "rcar_du_vdrm.h"
> +#ifdef CONFIG_DRM_RCAR_DU_VDRM
> +#include "../vdrm/vdrm_api.h"
> +#endif

Seems like vdrm_api.h belongs in include/drm/ as we should not pull in
headers like this.

Sam


Re: [PATH 1/4] drm: Add Virtual DRM device driver

2021-06-21 Thread Sam Ravnborg
Hi Tomohito

On Mon, Jun 21, 2021 at 03:44:00PM +0900, Tomohito Esaki wrote:
> Virtual DRM splits the resources of an overlay plane into multiple
> virtual devices to allow each plane to be accessed by each process.
> 
> This makes it possible to overlay images output from multiple processes
> on a display. For example, one process displays the camera image without
> compositor while another process overlays the compositor's drawing of
> the UI.
> 
> The virtual DRM creates a standalone virtual device and makes DRM planes
> from a master device (e.g. card0) accessible via one or more virtual
> devices. However, these planes are no longer accessible from the original
> device.
> Each virtual device (and plane) can be accessed via a separate
> device file.
> 
> Signed-off-by: Tomohito Esaki 
> ---
>  drivers/gpu/drm/Kconfig |   7 +
>  drivers/gpu/drm/Makefile|   1 +
>  drivers/gpu/drm/vdrm/vdrm_api.h |  68 +++
>  drivers/gpu/drm/vdrm/vdrm_drv.c | 859 
>  drivers/gpu/drm/vdrm/vdrm_drv.h |  80 +++

Please consider making the header files self-contained,
so there are no hidden dependencies between the two.

Use forward declarations rather than including header files where possible.
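
As an illustration, a minimal self-contained header could look roughly
like this (all names invented for the example, not the actual vdrm code):

/* vdrm_example.h - illustrative sketch only */
#ifndef __VDRM_EXAMPLE_H__
#define __VDRM_EXAMPLE_H__

struct drm_device;	/* forward declaration, no #include needed */
struct vdrm_device;	/* opaque handle for users of this header */

struct vdrm_device *vdrm_example_init(struct drm_device *dev);
void vdrm_example_fini(struct vdrm_device *vdrm);

#endif /* __VDRM_EXAMPLE_H__ */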

A few trivial comments in the following. I did not try to follow all the
functionality of the driver and I expect others to comment on the idea.

Sam

>  5 files changed, 1015 insertions(+)
>  create mode 100644 drivers/gpu/drm/vdrm/vdrm_api.h
>  create mode 100644 drivers/gpu/drm/vdrm/vdrm_drv.c
>  create mode 100644 drivers/gpu/drm/vdrm/vdrm_drv.h
> 
> diff --git a/drivers/gpu/drm/Kconfig b/drivers/gpu/drm/Kconfig
> index 3c16bd1afd87..ba7f4eeab385 100644
> --- a/drivers/gpu/drm/Kconfig
> +++ b/drivers/gpu/drm/Kconfig
> @@ -294,6 +294,13 @@ config DRM_VKMS
>  
> If M is selected the module will be called vkms.
>  
> +config DRM_VDRM
> + tristate "Virtual DRM"
> + depends on DRM
> + help
> +   Virtual DRM splits the resources of an overlay plane into multiple
> +   virtual devices to allow each plane to be accessed by each process.
Could you look into pulling a bit more info in here? You made a very nice
intro to the patch; consider using it in the help text too.


> +
>  source "drivers/gpu/drm/exynos/Kconfig"
>  
>  source "drivers/gpu/drm/rockchip/Kconfig"
> diff --git a/drivers/gpu/drm/Makefile b/drivers/gpu/drm/Makefile
> index 5279db4392df..55dbf85e2579 100644
> --- a/drivers/gpu/drm/Makefile
> +++ b/drivers/gpu/drm/Makefile
> @@ -82,6 +82,7 @@ obj-$(CONFIG_DRM_VMWGFX)+= vmwgfx/
>  obj-$(CONFIG_DRM_VIA)+=via/
>  obj-$(CONFIG_DRM_VGEM)   += vgem/
>  obj-$(CONFIG_DRM_VKMS)   += vkms/
> +obj-$(CONFIG_DRM_VDRM)   += vdrm/
Alphabetical order (mostly), so this belongs before vgem/.

>  obj-$(CONFIG_DRM_NOUVEAU) +=nouveau/
>  obj-$(CONFIG_DRM_EXYNOS) +=exynos/
>  obj-$(CONFIG_DRM_ROCKCHIP) +=rockchip/
> diff --git a/drivers/gpu/drm/vdrm/vdrm_api.h b/drivers/gpu/drm/vdrm/vdrm_api.h
> new file mode 100644
> index ..dd4d7e774800
> --- /dev/null
> +++ b/drivers/gpu/drm/vdrm/vdrm_api.h
> @@ -0,0 +1,68 @@
> +/* SPDX-License-Identifier: GPL-2.0+ */
> +/*
> + * vdrm_api.h -- Virtual DRM API
> + *
> + * Copyright (C) 2021 Renesas Electronics Corporation
> + */
> +
> +#ifndef __VDRM_API__
> +#define __VDRM_API__
> +
> +#include 
> +#include 
> +
> +/**
> + * struct vdrm_property_info - Information about the properties passed from
> + *  the DRM driver to vDRM
> + * @prop: Parent property to pass to vDRM
> + * @default_val: Default value for the property passed to vDRM
> + */
> +struct vdrm_property_info {
> + struct drm_property *prop;
> + uint64_t default_val;
> +};
It would be nice if all structs used inline comments - then you would
be consistent too.
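
For example, the struct above with inline kernel-doc member comments
(illustrative only, same content as the quoted code):

struct vdrm_property_info {
	/** @prop: parent property to pass to vDRM */
	struct drm_property *prop;

	/** @default_val: default value for the property passed to vDRM */
	uint64_t default_val;
};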

> +
> +/**
> + * struct vdrm_funcs - Callbacks to parent DRM driver
> + */
> +struct vdrm_funcs {
> + /**
> +  * @dumb_create:
> +  *
> +  * Called by _driver.dumb_create. Please read the documentation
> +  * for the _driver.dumb_create hook for more details.
> +  */
> + int (*dumb_create)(struct drm_file *file, struct drm_device *dev,
> +struct drm_mode_create_dumb *args);
> +
> + /**
> +  * @crtc_flush:
> +  *
> +  * Called by _crtc_helper_funcs.atomic_flush. Please read the
> +  * documentation for the _crtc_helper_funcs.atomic_flush hook for
> +  * more details.
> +  */
> + void (*crtc_flush)(struct drm_crtc *crtc);
> +};
> +
> +struct vdrm_device;
> +struct vdrm_display;
> +
> +void vdrm_drv_handle_vblank(struct vdrm_display *vdisplay);
> +void vdrm_drv_finish_page_flip(struct vdrm_display *vdisplay);
> +struct vdrm_device *vdrm_drv_init(struct drm_device *dev,
> +   struct device_node *np, int num_props,
> +   struct vdrm_property_info *props,
> +   const struct vdrm_funcs *funcs);
> +int 

Re: [PATCH] dma-buf: Document non-dynamic exporter expectations better

2021-06-21 Thread Christian König

On 21.06.21 at 17:17, Daniel Vetter wrote:

Christian and me realized we have a pretty massive disconnect about
different interpretations of what dma_resv is used for by different
drivers. The discussion is much, much bigger than this change here,
but this is an important one:

Non-dynamic exporters must guarantee that the memory they return is
ready for use. They cannot expect importers to wait for the exclusive
fence. Only dynamic importers are required to obey the dma_resv fences
strictly (and more patches are needed to define exactly what this
means).

Christian has patches to update nouveau, radeon and amdgpu. The only
other driver using both ttm and supporting dma-buf export is qxl,
which only uses synchronous ttm_bo_move.

v2: To hammer this in document that dynamic importers _must_ wait for
the exclusive fence after having called dma_buf_map_attachment.

Cc: Christian König 
Signed-off-by: Daniel Vetter 


Reviewed-by: Christian König 


---
  drivers/dma-buf/dma-buf.c |  3 +++
  include/linux/dma-buf.h   | 15 +++
  2 files changed, 18 insertions(+)

diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
index e3ba5db5f292..65cbd7f0f16a 100644
--- a/drivers/dma-buf/dma-buf.c
+++ b/drivers/dma-buf/dma-buf.c
@@ -956,6 +956,9 @@ EXPORT_SYMBOL_GPL(dma_buf_unpin);
   * the underlying backing storage is pinned for as long as a mapping exists,
   * therefore users/importers should not hold onto a mapping for undue amounts 
of
   * time.
+ *
+ * Important: Dynamic importers must wait for the exclusive fence of the struct
+ * dma_resv attached to the DMA-BUF first.
   */
  struct sg_table *dma_buf_map_attachment(struct dma_buf_attachment *attach,
enum dma_data_direction direction)
diff --git a/include/linux/dma-buf.h b/include/linux/dma-buf.h
index 342585bd6dff..92eec38a03aa 100644
--- a/include/linux/dma-buf.h
+++ b/include/linux/dma-buf.h
@@ -96,6 +96,12 @@ struct dma_buf_ops {
 * This is called automatically for non-dynamic importers from
 * dma_buf_attach().
 *
+* Note that similar to non-dynamic exporters in their @map_dma_buf
+* callback the driver must guarantee that the memory is available for
+* use and cleared of any old data by the time this function returns.
+* Drivers which pipeline their buffer moves internally must wait for
+* all moves and clears to complete.
+*
 * Returns:
 *
 * 0 on success, negative error code on failure.
@@ -144,6 +150,15 @@ struct dma_buf_ops {
 * This is always called with the dmabuf->resv object locked when
 * the dynamic_mapping flag is true.
 *
+* Note that for non-dynamic exporters the driver must guarantee that
+* that the memory is available for use and cleared of any old data by
+* the time this function returns.  Drivers which pipeline their buffer
+* moves internally must wait for all moves and clears to complete.
+* Dynamic exporters do not need to follow this rule: For non-dynamic
+* importers the buffer is already pinned through @pin, which has the
+* same requirements. Dynamic importers otoh are required to obey the
+* dma_resv fences.
+*
 * Returns:
 *
 * A _table scatter list of or the backing storage of the DMA buffer,




Re: [PATCH 1/3] drm/nouveau: wait for moving fence after pinning

2021-06-21 Thread Daniel Vetter
On Mon, Jun 21, 2021 at 5:49 PM Christian König
 wrote:
>
> On 21.06.21 at 16:54, Daniel Vetter wrote:
> > On Mon, Jun 21, 2021 at 03:03:26PM +0200, Christian König wrote:
> >> We actually need to wait for the moving fence after pinning
> >> the BO to make sure that the pin is completed.
> >>
> >> Signed-off-by: Christian König 
> >> CC: sta...@kernel.org
> >> ---
> >>   drivers/gpu/drm/nouveau/nouveau_prime.c | 8 +++-
> >>   1 file changed, 7 insertions(+), 1 deletion(-)
> >>
> >> diff --git a/drivers/gpu/drm/nouveau/nouveau_prime.c 
> >> b/drivers/gpu/drm/nouveau/nouveau_prime.c
> >> index 347488685f74..591738545eba 100644
> >> --- a/drivers/gpu/drm/nouveau/nouveau_prime.c
> >> +++ b/drivers/gpu/drm/nouveau/nouveau_prime.c
> >> @@ -93,7 +93,13 @@ int nouveau_gem_prime_pin(struct drm_gem_object *obj)
> >>  if (ret)
> >>  return -EINVAL;
> >>
> >> -return 0;
> >> +if (nvbo->bo.moving) {
> > Don't we need to hold the dma_resv to read this? We can grab a reference
> > and then unlock, but I think just unlocked wait can go boom pretty easily
> > (since we don't hold a reference or lock so someone else can jump in and
> > free the moving fence).
>
> The moving fence is only modified while the BO is moved and since we
> have just successfully pinned it

Yeah  ... so probably correct, but really tricky. Just wrapping a
ttm_bo_reserve/unreserve around the code you add should be enough and
get the job done?
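
Roughly something like this, as an untested sketch on top of the hunk
quoted above (so 'ret' and 'nvbo' come from the surrounding function):

	/* Hold the reservation while we look at and wait on the moving fence. */
	ret = ttm_bo_reserve(&nvbo->bo, false, false, NULL);
	if (ret)
		return ret;

	if (nvbo->bo.moving)
		ret = dma_fence_wait(nvbo->bo.moving, true);

	ttm_bo_unreserve(&nvbo->bo);

	if (ret)
		nouveau_bo_unpin(nvbo);

	return ret;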

> But in general I agree that it would be better to avoid this. I just
> didn't want to open a bigger can of worms by changing nouveau so much.

Yeah, but I'm kinda thinking of some helpers to wait for the move
fence (so that later on we can switch from having the exclusive fence
to the move fence do that, maybe). And then locking checks in there
would be nice.

Also avoids the case of explaining why lockless here is fine, but
lockless wait for the exclusive fence in e.g. a dynamic dma-buf
importer is very much not fine at all. Just all around less trouble.
-Daniel

>
> Christian.
>
> > -Daniel
> >
> >> +ret = dma_fence_wait(nvbo->bo.moving, true);
> >> +if (ret)
> >> +nouveau_bo_unpin(nvbo);
> >> +}
> >> +
> >> +return ret;
> >>   }
> >>
> >>   void nouveau_gem_prime_unpin(struct drm_gem_object *obj)
> >> --
> >> 2.25.1
> >>
>


-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch


Re: [PATCH 1/3] drm/nouveau: wait for moving fence after pinning

2021-06-21 Thread Christian König

On 21.06.21 at 16:54, Daniel Vetter wrote:

On Mon, Jun 21, 2021 at 03:03:26PM +0200, Christian König wrote:

We actually need to wait for the moving fence after pinning
the BO to make sure that the pin is completed.

Signed-off-by: Christian König 
CC: sta...@kernel.org
---
  drivers/gpu/drm/nouveau/nouveau_prime.c | 8 +++-
  1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/nouveau/nouveau_prime.c 
b/drivers/gpu/drm/nouveau/nouveau_prime.c
index 347488685f74..591738545eba 100644
--- a/drivers/gpu/drm/nouveau/nouveau_prime.c
+++ b/drivers/gpu/drm/nouveau/nouveau_prime.c
@@ -93,7 +93,13 @@ int nouveau_gem_prime_pin(struct drm_gem_object *obj)
if (ret)
return -EINVAL;
  
-	return 0;

+   if (nvbo->bo.moving) {

Don't we need to hold the dma_resv to read this? We can grab a reference
and then unlock, but I think just unlocked wait can go boom pretty easily
(since we don't hold a reference or lock so someone else can jump in and
free the moving fence).


The moving fence is only modified while the BO is moved and since we 
have just successfully pinned it


But in general I agree that it would be better to avoid this. I just 
didn't want to open a bigger can of worms by changing nouveau so much.


Christian.


-Daniel


+   ret = dma_fence_wait(nvbo->bo.moving, true);
+   if (ret)
+   nouveau_bo_unpin(nvbo);
+   }
+
+   return ret;
  }
  
  void nouveau_gem_prime_unpin(struct drm_gem_object *obj)

--
2.25.1





Re: [PATCH v2 08/12] drm/panfrost: Do the exception -> string translation using a table

2021-06-21 Thread Boris Brezillon
On Mon, 21 Jun 2021 16:19:38 +0100
Steven Price  wrote:

> On 21/06/2021 14:39, Boris Brezillon wrote:
> > Do the exception -> string translation using a table so we can add extra
> > fields if we need to. While at it add an error field to ease the
> > exception -> error conversion which we'll need if we want to set the
> > fence error to something that reflects the exception code.
> > 
> > TODO: fix the error codes.  
> 
> TODO: Do the TODO ;)

Yeah, I was kinda expecting help with that :-).

> 
> I'm not sure how useful translating the hardware error codes to Linux
> ones is. E.g. 'OOM' means something quite different from a normal
> -ENOMEM. One is running out of space in a predefined buffer, the other
> is Linux not able to allocate memory.

Okay, then I can just unconditionally set the fence error to -EINVAL
and drop this error field.

> 
> > 
> > Signed-off-by: Boris Brezillon 
> > ---
> >  drivers/gpu/drm/panfrost/panfrost_device.c | 134 +
> >  drivers/gpu/drm/panfrost/panfrost_device.h |   1 +
> >  2 files changed, 88 insertions(+), 47 deletions(-)
> > 
> > diff --git a/drivers/gpu/drm/panfrost/panfrost_device.c 
> > b/drivers/gpu/drm/panfrost/panfrost_device.c
> > index f7f5ca94f910..2de011cee258 100644
> > --- a/drivers/gpu/drm/panfrost/panfrost_device.c
> > +++ b/drivers/gpu/drm/panfrost/panfrost_device.c
> > @@ -292,55 +292,95 @@ void panfrost_device_fini(struct panfrost_device 
> > *pfdev)
> > panfrost_clk_fini(pfdev);
> >  }
> >  
> > -const char *panfrost_exception_name(u32 exception_code)
> > -{
> > -   switch (exception_code) {
> > -   /* Non-Fault Status code */
> > -   case 0x00: return "NOT_STARTED/IDLE/OK";
> > -   case 0x01: return "DONE";
> > -   case 0x02: return "INTERRUPTED";
> > -   case 0x03: return "STOPPED";
> > -   case 0x04: return "TERMINATED";
> > -   case 0x08: return "ACTIVE";
> > -   /* Job exceptions */
> > -   case 0x40: return "JOB_CONFIG_FAULT";
> > -   case 0x41: return "JOB_POWER_FAULT";
> > -   case 0x42: return "JOB_READ_FAULT";
> > -   case 0x43: return "JOB_WRITE_FAULT";
> > -   case 0x44: return "JOB_AFFINITY_FAULT";
> > -   case 0x48: return "JOB_BUS_FAULT";
> > -   case 0x50: return "INSTR_INVALID_PC";
> > -   case 0x51: return "INSTR_INVALID_ENC";
> > -   case 0x52: return "INSTR_TYPE_MISMATCH";
> > -   case 0x53: return "INSTR_OPERAND_FAULT";
> > -   case 0x54: return "INSTR_TLS_FAULT";
> > -   case 0x55: return "INSTR_BARRIER_FAULT";
> > -   case 0x56: return "INSTR_ALIGN_FAULT";
> > -   case 0x58: return "DATA_INVALID_FAULT";
> > -   case 0x59: return "TILE_RANGE_FAULT";
> > -   case 0x5A: return "ADDR_RANGE_FAULT";
> > -   case 0x60: return "OUT_OF_MEMORY";
> > -   /* GPU exceptions */
> > -   case 0x80: return "DELAYED_BUS_FAULT";
> > -   case 0x88: return "SHAREABILITY_FAULT";
> > -   /* MMU exceptions */
> > -   case 0xC1: return "TRANSLATION_FAULT_LEVEL1";
> > -   case 0xC2: return "TRANSLATION_FAULT_LEVEL2";
> > -   case 0xC3: return "TRANSLATION_FAULT_LEVEL3";
> > -   case 0xC4: return "TRANSLATION_FAULT_LEVEL4";
> > -   case 0xC8: return "PERMISSION_FAULT";
> > -   case 0xC9 ... 0xCF: return "PERMISSION_FAULT";
> > -   case 0xD1: return "TRANSTAB_BUS_FAULT_LEVEL1";
> > -   case 0xD2: return "TRANSTAB_BUS_FAULT_LEVEL2";
> > -   case 0xD3: return "TRANSTAB_BUS_FAULT_LEVEL3";
> > -   case 0xD4: return "TRANSTAB_BUS_FAULT_LEVEL4";
> > -   case 0xD8: return "ACCESS_FLAG";
> > -   case 0xD9 ... 0xDF: return "ACCESS_FLAG";
> > -   case 0xE0 ... 0xE7: return "ADDRESS_SIZE_FAULT";
> > -   case 0xE8 ... 0xEF: return "MEMORY_ATTRIBUTES_FAULT";
> > +#define PANFROST_EXCEPTION(id, err) \
> > +   [DRM_PANFROST_EXCEPTION_ ## id] = { \
> > +   .name = #id, \
> > +   .error = err, \
> > }
> >  
> > -   return "UNKNOWN";
> > +struct panfrost_exception_info {
> > +   const char *name;
> > +   int error;
> > +};
> > +
> > +static const struct panfrost_exception_info panfrost_exception_infos[] = {
> > +   PANFROST_EXCEPTION(OK, 0),
> > +   PANFROST_EXCEPTION(DONE, 0),
> > +   PANFROST_EXCEPTION(STOPPED, 0),
> > +   PANFROST_EXCEPTION(TERMINATED, 0),  
> 
> STOPPED/TERMINATED are not really 'success' from an application
> perspective. But equally they are ones that need special handling from
> the kernel.

STOPPED is a temporary state (at least it is right now), so the error
code doesn't matter much (the job is expected to be resumed before the
job fence is signaled and the final error assigned). TERMINATED should
probably have a valid error code reflecting the fact that the job
didn't finish properly so that any waiter knows the result of the
rendering is invalid.

> 
> > +   PANFROST_EXCEPTION(KABOOM, 0),
> > +   PANFROST_EXCEPTION(EUREKA, 0),
> > +   PANFROST_EXCEPTION(ACTIVE, 0),
> > +   PANFROST_EXCEPTION(JOB_CONFIG_FAULT, -EINVAL),
> > +   PANFROST_EXCEPTION(JOB_POWER_FAULT, -ECANCELED),
> > +   PANFROST_EXCEPTION(JOB_READ_FAULT, -EINVAL),
> > +   PANFROST_EXCEPTION(JOB_WRITE_FAULT, 

Re: [PATCH v2 12/12] drm/panfrost: Shorten the fence signalling section

2021-06-21 Thread Steven Price
On 21/06/2021 14:39, Boris Brezillon wrote:
> panfrost_reset() does not directly signal fences, but
> panfrost_scheduler_start() does, when calling drm_sched_start().

I have to admit to not fully understanding dma_fence_begin_signalling()
- but I thought the idea was that it should cover a relatively wide
region to actually catch locking bugs. Just wrapping drm_sched_start()
looks wrong: i.e. why isn't this code just contained in drm_sched_start()?

The relevant section from the docs reads:

>  * * All code necessary to complete a _fence must be annotated, from the
>  *   point where a fence is accessible to other threads, to the point where
>  *   dma_fence_signal() is called. Un-annotated code can contain deadlock 
> issues,
>  *   and due to the very strict rules and many corner cases it is infeasible 
> to
>  *   catch these just with review or normal stress testing

So it makes sense that we annotate code from when the reset is started
to after the signalling has happened. That way, if the reset path takes
any locks that could block while waiting for any of the fences which
might be signalled, we get moaned at by lockdep.
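
For reference, a minimal sketch of the annotation pattern described
there (not the driver code, just the bare mechanics):

/* needs <linux/dma-fence.h> */
static void example_signal_with_annotation(struct dma_fence *fence)
{
	bool cookie = dma_fence_begin_signalling();

	/*
	 * Everything from here to dma_fence_end_signalling() is part of the
	 * fence-signalling critical section: lockdep complains about any lock
	 * taken here that could also be held while waiting for the fence.
	 */
	dma_fence_signal(fence);
	dma_fence_end_signalling(cookie);
}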

Steve

> Signed-off-by: Boris Brezillon 
> ---
>  drivers/gpu/drm/panfrost/panfrost_job.c | 7 +++
>  1 file changed, 3 insertions(+), 4 deletions(-)
> 
> diff --git a/drivers/gpu/drm/panfrost/panfrost_job.c 
> b/drivers/gpu/drm/panfrost/panfrost_job.c
> index 74b63e1ee6d9..cf6abe0fdf47 100644
> --- a/drivers/gpu/drm/panfrost/panfrost_job.c
> +++ b/drivers/gpu/drm/panfrost/panfrost_job.c
> @@ -414,6 +414,7 @@ static bool panfrost_scheduler_stop(struct 
> panfrost_queue_state *queue,
>  static void panfrost_scheduler_start(struct panfrost_queue_state *queue)
>  {
>   enum panfrost_queue_status old_status;
> + bool cookie;
>  
>   mutex_lock(>lock);
>   old_status = atomic_xchg(>status,
> @@ -423,7 +424,9 @@ static void panfrost_scheduler_start(struct 
> panfrost_queue_state *queue)
>   /* Restore the original timeout before starting the scheduler. */
>   queue->sched.timeout = msecs_to_jiffies(JOB_TIMEOUT_MS);
>   drm_sched_resubmit_jobs(>sched);
> + cookie = dma_fence_begin_signalling();
>   drm_sched_start(>sched, true);
> + dma_fence_end_signalling(cookie);
>   old_status = atomic_xchg(>status,
>PANFROST_QUEUE_STATUS_ACTIVE);
>   if (old_status == PANFROST_QUEUE_STATUS_FAULT_PENDING)
> @@ -566,9 +569,7 @@ static void panfrost_reset(struct work_struct *work)
>reset.work);
>   unsigned long flags;
>   unsigned int i;
> - bool cookie;
>  
> - cookie = dma_fence_begin_signalling();
>   for (i = 0; i < NUM_JOB_SLOTS; i++) {
>   /*
>* We want pending timeouts to be handled before we attempt
> @@ -608,8 +609,6 @@ static void panfrost_reset(struct work_struct *work)
>  
>   for (i = 0; i < NUM_JOB_SLOTS; i++)
>   panfrost_scheduler_start(>js->queue[i]);
> -
> - dma_fence_end_signalling(cookie);
>  }
>  
>  int panfrost_job_init(struct panfrost_device *pfdev)
> 



Re: [PATCH] drm/amdgpu: fix amdgpu_preempt_mgr_new()

2021-06-21 Thread Deucher, Alexander
[Public]

I've dropped it from my tree in that case.

From: Christian König 
Sent: Monday, June 21, 2021 6:27 AM
To: Alex Deucher ; Kuehling, Felix 

Cc: David Airlie ; Pan, Xinhui ; 
kernel-janit...@vger.kernel.org ; Maling list 
- DRI developers ; amd-gfx list 
; Daniel Vetter ; Deucher, 
Alexander ; Dave Airlie ; 
Koenig, Christian ; Dan Carpenter 

Subject: Re: [PATCH] drm/amdgpu: fix amdgpu_preempt_mgr_new()

On 18.06.21 at 23:18, Alex Deucher wrote:
> On Fri, Jun 18, 2021 at 11:40 AM Felix Kuehling  
> wrote:
>> On 2021-06-18 at 4:39 a.m., Christian König wrote:
>>> On 18.06.21 at 10:37, Dan Carpenter wrote:
 There is a reversed if statement in amdgpu_preempt_mgr_new() so it
 always returns -ENOMEM.

 Fixes: 09b020bb05a5 ("Merge tag 'drm-misc-next-2021-06-09' of
 git://anongit.freedesktop.org/drm/drm-misc into drm-next")
 Signed-off-by: Dan Carpenter 
>>> Must be some fallout from merging it with the TTM changes.
>>>
>>> Anyway, patch is Reviewed-by: Christian König 
>> This is obviously not for amd-staging-drm-next. Christian, are you going
>> to apply it to the relevant branches?
> I've applied it to my drm-next branch.

I already pushed it to drm-misc-next last week.

Christian.

>
> Alex
>
>
>> Thanks,
>>Felix
>>
>>
>>> Thanks,
>>> Christian.
>>>
 ---
drivers/gpu/drm/amd/amdgpu/amdgpu_preempt_mgr.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

 diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_preempt_mgr.c
 b/drivers/gpu/drm/amd/amdgpu/amdgpu_preempt_mgr.c
 index f6aff7ce5160..d02c8637f909 100644
 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_preempt_mgr.c
 +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_preempt_mgr.c
 @@ -71,7 +71,7 @@ static int amdgpu_preempt_mgr_new(struct
 ttm_resource_manager *man,
struct amdgpu_preempt_mgr *mgr = to_preempt_mgr(man);
  *res = kzalloc(sizeof(**res), GFP_KERNEL);
 -if (*res)
 +if (!*res)
return -ENOMEM;
  ttm_resource_init(tbo, place, *res);



Re: [v7 5/5] drm/panel-simple: Add Samsung ATNA33XC20

2021-06-21 Thread Doug Anderson
Hi,

On Sun, Jun 20, 2021 at 3:01 AM Sam Ravnborg  wrote:
>
> Hi Rajeev
> On Sat, Jun 19, 2021 at 04:10:30PM +0530, Rajeev Nandan wrote:
> > Add Samsung 13.3" FHD eDP AMOLED panel.
> >
> > Signed-off-by: Rajeev Nandan 
> > Reviewed-by: Douglas Anderson 
> > ---
> >
> > Changes in v4:
> > - New
> >
> > Changes in v5:
> > - Remove "uses_dpcd_backlight" property, not required now. (Douglas)
> >
> > Changes in v7:
> > - Update disable_to_power_off and power_to_enable delays. (Douglas)
> >
> >  drivers/gpu/drm/panel/panel-simple.c | 33 +
> >  1 file changed, 33 insertions(+)
> >
> > diff --git a/drivers/gpu/drm/panel/panel-simple.c 
> > b/drivers/gpu/drm/panel/panel-simple.c
> > index 86e5a45..4adc44a 100644
> > --- a/drivers/gpu/drm/panel/panel-simple.c
> > +++ b/drivers/gpu/drm/panel/panel-simple.c
> > @@ -3562,6 +3562,36 @@ static const struct panel_desc 
> > rocktech_rk101ii01d_ct = {
> >   .connector_type = DRM_MODE_CONNECTOR_LVDS,
> >  };
> >
> > +static const struct drm_display_mode samsung_atna33xc20_mode = {
> > + .clock = 138770,
> > + .hdisplay = 1920,
> > + .hsync_start = 1920 + 48,
> > + .hsync_end = 1920 + 48 + 32,
> > + .htotal = 1920 + 48 + 32 + 80,
> > + .vdisplay = 1080,
> > + .vsync_start = 1080 + 8,
> > + .vsync_end = 1080 + 8 + 8,
> > + .vtotal = 1080 + 8 + 8 + 16,
> > + .flags = DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_NVSYNC,
> > +};
> > +
> > +static const struct panel_desc samsung_atna33xc20 = {
> > + .modes = _atna33xc20_mode,
> > + .num_modes = 1,
> > + .bpc = 10,
> > + .size = {
> > + .width = 294,
> > + .height = 165,
> > + },
> > + .delay = {
> > + .disable_to_power_off = 200,
> > + .power_to_enable = 400,
> > + .hpd_absent_delay = 200,
> > + .unprepare = 500,
> > + },
> > + .connector_type = DRM_MODE_CONNECTOR_eDP,
> > +};
>
> bus_format is missing. There should be a warning about this when you
> probe the display.

Sam: I'm curious about the requirement of hardcoding bus_format like
this for eDP panels. Most eDP panels support a variety of bits per
pixel and do so dynamically. Ones I've poked at freely support 6bpp
and 8bpp. Presumably this one supports both of those modes and also
10bpp. I haven't done detailed research on it, but it would also
surprise me if the "bus format" for a given bpp needed to be specified
for eDP. Presumably since eDP has most of the "autodetect" type
features of DP then if the format needed to be accounted for that you
could query the hardware?

Looking at the datasheet for the ti-sn65dsi86 MIPI-to-eDP bridge chip
I see that it explicitly calls out the bus formats that it supports
for the MIPI side but doesn't call out anything for eDP. That would
tend to support my belief that there isn't variance on the eDP side...

Maybe the right fix is to actually change the check not to give a
warning for eDP panels? ...or am I misunderstanding?


> The bpc of 10 is unusual; the current code warns if bpc is neither 6 nor
> 8. If 10 is correct then update the code to accept bpc=10.

I'm pretty sure it's 10 based on this panel's datasheet, though this
panel also accepts 8 bpc. Fixing the warning seems like a good idea to
me--I wasn't aware of it.
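
A hypothetical sketch of what a relaxed probe-time check could look
like (the surrounding 'panel'/'dev' variables and the exact current
checks are assumed, not quoted from the driver):

	/* Sketch only: don't warn about a missing bus_format for eDP panels,
	 * and accept 10 bpc in addition to 6 and 8.
	 */
	if (!panel->desc->bus_format &&
	    panel->desc->connector_type != DRM_MODE_CONNECTOR_eDP)
		dev_warn(dev, "Specify missing bus_format\n");

	if (panel->desc->bpc != 6 && panel->desc->bpc != 8 &&
	    panel->desc->bpc != 10)
		dev_warn(dev, "Expected bpc in {6,8,10} but got: %u\n",
			 panel->desc->bpc);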

-Doug


Re: [PATCH v2 11/12] drm/panfrost: Make ->run_job() return an ERR_PTR() when appropriate

2021-06-21 Thread Steven Price
On 21/06/2021 14:39, Boris Brezillon wrote:
> If the fence creation fail, we can return the error pointer directly.
> The core will update the fence error accordingly.
> 
> Signed-off-by: Boris Brezillon 

Reviewed-by: Steven Price 

> ---
>  drivers/gpu/drm/panfrost/panfrost_job.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/drivers/gpu/drm/panfrost/panfrost_job.c 
> b/drivers/gpu/drm/panfrost/panfrost_job.c
> index a51fa0a81367..74b63e1ee6d9 100644
> --- a/drivers/gpu/drm/panfrost/panfrost_job.c
> +++ b/drivers/gpu/drm/panfrost/panfrost_job.c
> @@ -355,7 +355,7 @@ static struct dma_fence *panfrost_job_run(struct 
> drm_sched_job *sched_job)
>  
>   fence = panfrost_fence_create(pfdev, slot);
>   if (IS_ERR(fence))
> - return NULL;
> + return fence;
>  
>   if (job->done_fence)
>   dma_fence_put(job->done_fence);
> 



Re: [PATCH v2 05/12] drm/panfrost: Disable the AS on unhandled page faults

2021-06-21 Thread Boris Brezillon
On Mon, 21 Jun 2021 16:09:32 +0100
Steven Price  wrote:

> On 21/06/2021 14:39, Boris Brezillon wrote:
> > If we don't do that, we have to wait for the job timeout to expire
> > before the fault jobs gets killed.
> > 
> > Signed-off-by: Boris Brezillon   
> 
> Don't we need to do something here to allow recovery of the MMU context
> in the future? panfrost_mmu_disable() will zero out the MMU registers on
> the hardware, but AFAICS panfrost_mmu_enable() won't be called to
> restore the values until something evicts the address space (GPU power
> down/reset or just too many other processes).
> 
> The ideal would be to block submission of new jobs from this context and
> then wait until existing jobs have completed at which point the MMU
> state can be restored and jobs allowed again.

Uh, I assumed it'd be okay to have subsequent jobs coming from
this context fail with a BUS_FAULT until the context is closed. But
what you suggest seems more robust.

> 
> But at a minimum I think we should have something like an 'MMU poisoned'
> bit that panfrost_mmu_as_get() can check.
> 
> Steve
> 
> > ---
> >  drivers/gpu/drm/panfrost/panfrost_mmu.c | 6 +-
> >  1 file changed, 5 insertions(+), 1 deletion(-)
> > 
> > diff --git a/drivers/gpu/drm/panfrost/panfrost_mmu.c 
> > b/drivers/gpu/drm/panfrost/panfrost_mmu.c
> > index 2a9bf30edc9d..d5c624e776f1 100644
> > --- a/drivers/gpu/drm/panfrost/panfrost_mmu.c
> > +++ b/drivers/gpu/drm/panfrost/panfrost_mmu.c
> > @@ -661,7 +661,7 @@ static irqreturn_t panfrost_mmu_irq_handler_thread(int 
> > irq, void *data)
> > if ((status & mask) == BIT(as) && (exception_type & 0xF8) == 
> > 0xC0)
> > ret = panfrost_mmu_map_fault_addr(pfdev, as, addr);
> >  
> > -   if (ret)
> > +   if (ret) {
> > /* terminal fault, print info about the fault */
> > dev_err(pfdev->dev,
> > "Unhandled Page fault in AS%d at VA 0x%016llX\n"
> > @@ -679,6 +679,10 @@ static irqreturn_t panfrost_mmu_irq_handler_thread(int 
> > irq, void *data)
> > access_type, access_type_name(pfdev, 
> > fault_status),
> > source_id);
> >  
> > +   /* Disable the MMU to stop jobs on this AS immediately 
> > */
> > +   panfrost_mmu_disable(pfdev, as);
> > +   }
> > +
> > status &= ~mask;
> >  
> > /* If we received new MMU interrupts, process them before 
> > returning. */
> >   
> 



Re: [PATCH v2 10/12] drm/panfrost: Kill in-flight jobs on FD close

2021-06-21 Thread Steven Price
On 21/06/2021 14:39, Boris Brezillon wrote:
> If the process who submitted these jobs decided to close the FD before
> the jobs are done it probably means it doesn't care about the result.
> 
> Signed-off-by: Boris Brezillon 
> ---
>  drivers/gpu/drm/panfrost/panfrost_job.c | 33 +
>  1 file changed, 28 insertions(+), 5 deletions(-)
> 
> diff --git a/drivers/gpu/drm/panfrost/panfrost_job.c 
> b/drivers/gpu/drm/panfrost/panfrost_job.c
> index aedc604d331c..a51fa0a81367 100644
> --- a/drivers/gpu/drm/panfrost/panfrost_job.c
> +++ b/drivers/gpu/drm/panfrost/panfrost_job.c
> @@ -494,14 +494,22 @@ static irqreturn_t panfrost_job_irq_handler(int irq, 
> void *data)
>   if (status & JOB_INT_MASK_ERR(j)) {
>   enum panfrost_queue_status old_status;
>   u32 js_status = job_read(pfdev, JS_STATUS(j));
> + int error = panfrost_exception_to_error(js_status);
> + const char *exception_name = 
> panfrost_exception_name(js_status);

NIT: I'm not sure if it's worth it, but it feels like a function which
returns both the name and error-code would make sense. E.g. making
struct panfrost_exception_info public.
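
Something along these lines, purely as a sketch (assuming the table from
patch 08/12 is indexed directly by the exception code):

static const struct panfrost_exception_info *
panfrost_exception_get_info(u32 exception_code)
{
	/* Out-of-range or unnamed codes return NULL so callers can fall back. */
	if (exception_code >= ARRAY_SIZE(panfrost_exception_infos) ||
	    !panfrost_exception_infos[exception_code].name)
		return NULL;

	return &panfrost_exception_infos[exception_code];
}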

>  
>   job_write(pfdev, JS_COMMAND_NEXT(j), JS_COMMAND_NOP);
>  
> - dev_err(pfdev->dev, "js fault, js=%d, status=%s, 
> head=0x%x, tail=0x%x",
> - j,
> - panfrost_exception_name(js_status),
> - job_read(pfdev, JS_HEAD_LO(j)),
> - job_read(pfdev, JS_TAIL_LO(j)));
> + if (!error) {
> + dev_dbg(pfdev->dev, "js interrupt, js=%d, 
> status=%s, head=0x%x, tail=0x%x",
> + j, exception_name,
> + job_read(pfdev, JS_HEAD_LO(j)),
> + job_read(pfdev, JS_TAIL_LO(j)));
> + } else {
> + dev_err(pfdev->dev, "js fault, js=%d, 
> status=%s, head=0x%x, tail=0x%x",
> + j, exception_name,
> + job_read(pfdev, JS_HEAD_LO(j)),
> + job_read(pfdev, JS_TAIL_LO(j)));
> + }

Again here you're going to have issues with TERMINATED - dev_err() is
probably too chatty, so just changing panfrost_exception_to_error() to
return an error value is going to cause problems here.

Steve

>  
>   /* If we need a reset, signal it to the reset handler,
>* otherwise, update the fence error field and signal
> @@ -688,10 +696,25 @@ int panfrost_job_open(struct panfrost_file_priv 
> *panfrost_priv)
>  
>  void panfrost_job_close(struct panfrost_file_priv *panfrost_priv)
>  {
> + struct panfrost_device *pfdev = panfrost_priv->pfdev;
> + unsigned long flags;
>   int i;
>  
>   for (i = 0; i < NUM_JOB_SLOTS; i++)
>   drm_sched_entity_destroy(_priv->sched_entity[i]);
> +
> + /* Kill in-flight jobs */
> + spin_lock_irqsave(>js->job_lock, flags);
> + for (i = 0; i < NUM_JOB_SLOTS; i++) {
> + struct drm_sched_entity *entity = 
> _priv->sched_entity[i];
> + struct panfrost_job *job = pfdev->jobs[i];
> +
> + if (!job || job->base.entity != entity)
> + continue;
> +
> + job_write(pfdev, JS_COMMAND(i), JS_COMMAND_HARD_STOP);
> + }
> + spin_unlock_irqrestore(>js->job_lock, flags);
>  }
>  
>  int panfrost_job_is_idle(struct panfrost_device *pfdev)
> 



Re: [kbuild-all] Re: [PATCH] drm/radeon: Fix NULL dereference when updating memory stats

2021-06-21 Thread Philip Li
On Mon, Jun 21, 2021 at 10:41:57PM +0800, kernel test robot wrote:
> Hi Mikel,
> 
> Thank you for the patch! Yet something to improve:
> 
> [auto build test ERROR on next-20210618]
> [cannot apply to linus/master v5.13-rc7 v5.13-rc6 v5.13-rc5 v5.13-rc7]
> [If your patch is applied to the wrong git tree, kindly drop us a note.
> And when submitting patch, we suggest to use '--base' as documented in
> https://git-scm.com/docs/git-format-patch]
> 
> url:
> https://github.com/0day-ci/linux/commits/Mikel-Rychliski/drm-radeon-Fix-NULL-dereference-when-updating-memory-stats/20210621-091140
> base:e71e3a48a7e89fa71fb70bf4602367528864d2ff
> config: mips-allyesconfig (attached as .config)
> compiler: mips-linux-gcc (GCC) 9.3.0
> reproduce (this is a W=1 build):
> wget 
> https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O 
> ~/bin/make.cross
> chmod +x ~/bin/make.cross
> # 
> https://github.com/0day-ci/linux/commit/e5ec8682645a1ee2553fcb073d000802c11d2cb5
> git remote add linux-review https://github.com/0day-ci/linux
> git fetch --no-tags linux-review 
> Mikel-Rychliski/drm-radeon-Fix-NULL-dereference-when-updating-memory-stats/20210621-091140
> git checkout e5ec8682645a1ee2553fcb073d000802c11d2cb5
> # save the attached .config to linux build tree
> COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-9.3.0 make.cross 
> ARCH=mips 
> 
> If you fix the issue, kindly add following tag as appropriate
> Reported-by: kernel test robot 
> 
> All errors (new ones prefixed by >>):
Sorry for the broken report, kindly ignore this, we will fix
this asap.

> 
> >> make[2]: *** No rule to make target 
> >> '/tmp/kernel/mips-allyesconfig/gcc-9.3.0/e5ec8682645a1ee2553fcb073d000802c11d2cb5/lib/modules/5.13.0-rc6-next-20210618+/kernel/drivers/accessibility/speakup/speakup_decpc.ko',
> >>  needed by '__modinst'.
> >> make[2]: *** No rule to make target 
> >> '/tmp/kernel/mips-allyesconfig/gcc-9.3.0/e5ec8682645a1ee2553fcb073d000802c11d2cb5/lib/modules/5.13.0-rc6-next-20210618+/kernel/drivers/base/test/test_async_driver_probe.ko',
> >>  needed by '__modinst'.
> >> make[2]: *** No rule to make target 
> >> '/tmp/kernel/mips-allyesconfig/gcc-9.3.0/e5ec8682645a1ee2553fcb073d000802c11d2cb5/lib/modules/5.13.0-rc6-next-20210618+/kernel/drivers/i2c/i2c-stub.ko',
> >>  needed by '__modinst'.
> >> make[2]: *** No rule to make target 
> >> '/tmp/kernel/mips-allyesconfig/gcc-9.3.0/e5ec8682645a1ee2553fcb073d000802c11d2cb5/lib/modules/5.13.0-rc6-next-20210618+/kernel/drivers/mtd/tests/mtd_nandbiterrs.ko',
> >>  needed by '__modinst'.
> >> make[2]: *** No rule to make target 
> >> '/tmp/kernel/mips-allyesconfig/gcc-9.3.0/e5ec8682645a1ee2553fcb073d000802c11d2cb5/lib/modules/5.13.0-rc6-next-20210618+/kernel/drivers/mtd/tests/mtd_nandecctest.ko',
> >>  needed by '__modinst'.
> >> make[2]: *** No rule to make target 
> >> '/tmp/kernel/mips-allyesconfig/gcc-9.3.0/e5ec8682645a1ee2553fcb073d000802c11d2cb5/lib/modules/5.13.0-rc6-next-20210618+/kernel/drivers/mtd/tests/mtd_oobtest.ko',
> >>  needed by '__modinst'.
> >> make[2]: *** No rule to make target 
> >> '/tmp/kernel/mips-allyesconfig/gcc-9.3.0/e5ec8682645a1ee2553fcb073d000802c11d2cb5/lib/modules/5.13.0-rc6-next-20210618+/kernel/drivers/mtd/tests/mtd_pagetest.ko',
> >>  needed by '__modinst'.
> >> make[2]: *** No rule to make target 
> >> '/tmp/kernel/mips-allyesconfig/gcc-9.3.0/e5ec8682645a1ee2553fcb073d000802c11d2cb5/lib/modules/5.13.0-rc6-next-20210618+/kernel/drivers/mtd/tests/mtd_readtest.ko',
> >>  needed by '__modinst'.
> >> make[2]: *** No rule to make target 
> >> '/tmp/kernel/mips-allyesconfig/gcc-9.3.0/e5ec8682645a1ee2553fcb073d000802c11d2cb5/lib/modules/5.13.0-rc6-next-20210618+/kernel/drivers/mtd/tests/mtd_speedtest.ko',
> >>  needed by '__modinst'.
> >> make[2]: *** No rule to make target 
> >> '/tmp/kernel/mips-allyesconfig/gcc-9.3.0/e5ec8682645a1ee2553fcb073d000802c11d2cb5/lib/modules/5.13.0-rc6-next-20210618+/kernel/drivers/mtd/tests/mtd_stresstest.ko',
> >>  needed by '__modinst'.
> >> make[2]: *** No rule to make target 
> >> '/tmp/kernel/mips-allyesconfig/gcc-9.3.0/e5ec8682645a1ee2553fcb073d000802c11d2cb5/lib/modules/5.13.0-rc6-next-20210618+/kernel/drivers/mtd/tests/mtd_subpagetest.ko',
> >>  needed by '__modinst'.
> >> make[2]: *** No rule to make target 
> >> '/tmp/kernel/mips-allyesconfig/gcc-9.3.0/e5ec8682645a1ee2553fcb073d000802c11d2cb5/lib/modules/5.13.0-rc6-next-20210618+/kernel/drivers/mtd/tests/mtd_torturetest.ko',
> >>  needed by '__modinst'.
> &

Re: [PATCH v2 09/12] drm/panfrost: Don't reset the GPU on job faults unless we really have to

2021-06-21 Thread Steven Price
On 21/06/2021 14:39, Boris Brezillon wrote:
> If we can recover from a fault without a reset there's no reason to
> issue one.
> 
> Signed-off-by: Boris Brezillon 
> ---
>  drivers/gpu/drm/panfrost/panfrost_device.c |  9 ++
>  drivers/gpu/drm/panfrost/panfrost_device.h |  2 ++
>  drivers/gpu/drm/panfrost/panfrost_job.c| 35 ++
>  3 files changed, 34 insertions(+), 12 deletions(-)
> 
> diff --git a/drivers/gpu/drm/panfrost/panfrost_device.c 
> b/drivers/gpu/drm/panfrost/panfrost_device.c
> index 2de011cee258..ac76e8646e97 100644
> --- a/drivers/gpu/drm/panfrost/panfrost_device.c
> +++ b/drivers/gpu/drm/panfrost/panfrost_device.c
> @@ -383,6 +383,15 @@ int panfrost_exception_to_error(u32 exception_code)
>   return panfrost_exception_infos[exception_code].error;
>  }
>  
> +bool panfrost_exception_needs_reset(const struct panfrost_device *pfdev,
> + u32 exception_code)
> +{
> + /* Right now, none of the GPU we support need a reset, but this
> +  * might change (e.g. Valhall GPUs require a when a BUS_FAULT occurs).

NITs:^ some ^ reset

Or just drop the example for now.

> +  */
> + return false;
> +}
> +
>  void panfrost_device_reset(struct panfrost_device *pfdev)
>  {
>   panfrost_gpu_soft_reset(pfdev);
> diff --git a/drivers/gpu/drm/panfrost/panfrost_device.h 
> b/drivers/gpu/drm/panfrost/panfrost_device.h
> index 498c7b5dccd0..95e6044008d2 100644
> --- a/drivers/gpu/drm/panfrost/panfrost_device.h
> +++ b/drivers/gpu/drm/panfrost/panfrost_device.h
> @@ -175,6 +175,8 @@ int panfrost_device_suspend(struct device *dev);
>  
>  const char *panfrost_exception_name(u32 exception_code);
>  int panfrost_exception_to_error(u32 exception_code);
> +bool panfrost_exception_needs_reset(const struct panfrost_device *pfdev,
> + u32 exception_code);
>  
>  static inline void
>  panfrost_device_schedule_reset(struct panfrost_device *pfdev)
> diff --git a/drivers/gpu/drm/panfrost/panfrost_job.c 
> b/drivers/gpu/drm/panfrost/panfrost_job.c
> index be5d3e4a1d0a..aedc604d331c 100644
> --- a/drivers/gpu/drm/panfrost/panfrost_job.c
> +++ b/drivers/gpu/drm/panfrost/panfrost_job.c
> @@ -493,27 +493,38 @@ static irqreturn_t panfrost_job_irq_handler(int irq, 
> void *data)
>  
>   if (status & JOB_INT_MASK_ERR(j)) {
>   enum panfrost_queue_status old_status;
> + u32 js_status = job_read(pfdev, JS_STATUS(j));
>  
>   job_write(pfdev, JS_COMMAND_NEXT(j), JS_COMMAND_NOP);
>  
>   dev_err(pfdev->dev, "js fault, js=%d, status=%s, 
> head=0x%x, tail=0x%x",
>   j,
> - panfrost_exception_name(job_read(pfdev, 
> JS_STATUS(j))),
> + panfrost_exception_name(js_status),
>   job_read(pfdev, JS_HEAD_LO(j)),
>   job_read(pfdev, JS_TAIL_LO(j)));
>  
> - /*
> -  * When the queue is being restarted we don't report
> -  * faults directly to avoid races between the timeout
> -  * and reset handlers. panfrost_scheduler_start() will
> -  * call drm_sched_fault() after the queue has been
> -  * started if status == FAULT_PENDING.
> + /* If we need a reset, signal it to the reset handler,
> +  * otherwise, update the fence error field and signal
> +  * the job fence.
>*/
> - old_status = atomic_cmpxchg(>js->queue[j].status,
> - 
> PANFROST_QUEUE_STATUS_STARTING,
> - 
> PANFROST_QUEUE_STATUS_FAULT_PENDING);
> - if (old_status == PANFROST_QUEUE_STATUS_ACTIVE)
> - drm_sched_fault(>js->queue[j].sched);
> + if (panfrost_exception_needs_reset(pfdev, js_status)) {
> + /*
> +  * When the queue is being restarted we don't 
> report
> +  * faults directly to avoid races between the 
> timeout
> +  * and reset handlers. 
> panfrost_scheduler_start() will
> +  * call drm_sched_fault() after the queue has 
> been
> +  * started if status == FAULT_PENDING.
> +  */
> + old_status = 
> atomic_cmpxchg(>js->queue[j].status,
> + 
> PANFROST_QUEUE_STATUS_STARTING,
> + 
> PANFROST_QUEUE_STATUS_FAULT_PENDING);
> + if (old_status == PANFROST_QUEUE_STATUS_ACTIVE)

Re: [PATCH v2 08/12] drm/panfrost: Do the exception -> string translation using a table

2021-06-21 Thread Steven Price
On 21/06/2021 14:39, Boris Brezillon wrote:
> Do the exception -> string translation using a table so we can add extra
> fields if we need to. While at it add an error field to ease the
> exception -> error conversion which we'll need if we want to set the
> fence error to something that reflects the exception code.
> 
> TODO: fix the error codes.

TODO: Do the TODO ;)

I'm not sure how useful translating the hardware error codes to Linux
ones is. E.g. 'OOM' means something quite different from a normal
-ENOMEM. One is running out of space in a predefined buffer, the other
is Linux not able to allocate memory.

> 
> Signed-off-by: Boris Brezillon 
> ---
>  drivers/gpu/drm/panfrost/panfrost_device.c | 134 +
>  drivers/gpu/drm/panfrost/panfrost_device.h |   1 +
>  2 files changed, 88 insertions(+), 47 deletions(-)
> 
> diff --git a/drivers/gpu/drm/panfrost/panfrost_device.c 
> b/drivers/gpu/drm/panfrost/panfrost_device.c
> index f7f5ca94f910..2de011cee258 100644
> --- a/drivers/gpu/drm/panfrost/panfrost_device.c
> +++ b/drivers/gpu/drm/panfrost/panfrost_device.c
> @@ -292,55 +292,95 @@ void panfrost_device_fini(struct panfrost_device *pfdev)
>   panfrost_clk_fini(pfdev);
>  }
>  
> -const char *panfrost_exception_name(u32 exception_code)
> -{
> - switch (exception_code) {
> - /* Non-Fault Status code */
> - case 0x00: return "NOT_STARTED/IDLE/OK";
> - case 0x01: return "DONE";
> - case 0x02: return "INTERRUPTED";
> - case 0x03: return "STOPPED";
> - case 0x04: return "TERMINATED";
> - case 0x08: return "ACTIVE";
> - /* Job exceptions */
> - case 0x40: return "JOB_CONFIG_FAULT";
> - case 0x41: return "JOB_POWER_FAULT";
> - case 0x42: return "JOB_READ_FAULT";
> - case 0x43: return "JOB_WRITE_FAULT";
> - case 0x44: return "JOB_AFFINITY_FAULT";
> - case 0x48: return "JOB_BUS_FAULT";
> - case 0x50: return "INSTR_INVALID_PC";
> - case 0x51: return "INSTR_INVALID_ENC";
> - case 0x52: return "INSTR_TYPE_MISMATCH";
> - case 0x53: return "INSTR_OPERAND_FAULT";
> - case 0x54: return "INSTR_TLS_FAULT";
> - case 0x55: return "INSTR_BARRIER_FAULT";
> - case 0x56: return "INSTR_ALIGN_FAULT";
> - case 0x58: return "DATA_INVALID_FAULT";
> - case 0x59: return "TILE_RANGE_FAULT";
> - case 0x5A: return "ADDR_RANGE_FAULT";
> - case 0x60: return "OUT_OF_MEMORY";
> - /* GPU exceptions */
> - case 0x80: return "DELAYED_BUS_FAULT";
> - case 0x88: return "SHAREABILITY_FAULT";
> - /* MMU exceptions */
> - case 0xC1: return "TRANSLATION_FAULT_LEVEL1";
> - case 0xC2: return "TRANSLATION_FAULT_LEVEL2";
> - case 0xC3: return "TRANSLATION_FAULT_LEVEL3";
> - case 0xC4: return "TRANSLATION_FAULT_LEVEL4";
> - case 0xC8: return "PERMISSION_FAULT";
> - case 0xC9 ... 0xCF: return "PERMISSION_FAULT";
> - case 0xD1: return "TRANSTAB_BUS_FAULT_LEVEL1";
> - case 0xD2: return "TRANSTAB_BUS_FAULT_LEVEL2";
> - case 0xD3: return "TRANSTAB_BUS_FAULT_LEVEL3";
> - case 0xD4: return "TRANSTAB_BUS_FAULT_LEVEL4";
> - case 0xD8: return "ACCESS_FLAG";
> - case 0xD9 ... 0xDF: return "ACCESS_FLAG";
> - case 0xE0 ... 0xE7: return "ADDRESS_SIZE_FAULT";
> - case 0xE8 ... 0xEF: return "MEMORY_ATTRIBUTES_FAULT";
> +#define PANFROST_EXCEPTION(id, err) \
> + [DRM_PANFROST_EXCEPTION_ ## id] = { \
> + .name = #id, \
> + .error = err, \
>   }
>  
> - return "UNKNOWN";
> +struct panfrost_exception_info {
> + const char *name;
> + int error;
> +};
> +
> +static const struct panfrost_exception_info panfrost_exception_infos[] = {
> + PANFROST_EXCEPTION(OK, 0),
> + PANFROST_EXCEPTION(DONE, 0),
> + PANFROST_EXCEPTION(STOPPED, 0),
> + PANFROST_EXCEPTION(TERMINATED, 0),

STOPPED/TERMINATED are not really 'success' from an application
perspective. But equally they are ones that need special handling from
the kernel.

> + PANFROST_EXCEPTION(KABOOM, 0),
> + PANFROST_EXCEPTION(EUREKA, 0),
> + PANFROST_EXCEPTION(ACTIVE, 0),
> + PANFROST_EXCEPTION(JOB_CONFIG_FAULT, -EINVAL),
> + PANFROST_EXCEPTION(JOB_POWER_FAULT, -ECANCELED),
> + PANFROST_EXCEPTION(JOB_READ_FAULT, -EINVAL),
> + PANFROST_EXCEPTION(JOB_WRITE_FAULT, -EINVAL),
> + PANFROST_EXCEPTION(JOB_AFFINITY_FAULT, -EINVAL),
> + PANFROST_EXCEPTION(JOB_BUS_FAULT, -EINVAL),
> + PANFROST_EXCEPTION(INSTR_INVALID_PC, -EINVAL),
> + PANFROST_EXCEPTION(INSTR_INVALID_ENC, -EINVAL),
> + PANFROST_EXCEPTION(INSTR_BARRIER_FAULT, -EINVAL),
> + PANFROST_EXCEPTION(DATA_INVALID_FAULT, -EINVAL),
> + PANFROST_EXCEPTION(TILE_RANGE_FAULT, -EINVAL),
> + PANFROST_EXCEPTION(ADDR_RANGE_FAULT, -EINVAL),
> + PANFROST_EXCEPTION(IMPRECISE_FAULT, -EINVAL),
> + PANFROST_EXCEPTION(OOM, -ENOMEM),
> + PANFROST_EXCEPTION(UNKNOWN, -EINVAL),

We should probably make a distinction between this 'special' UNKNOWN

[PATCH] dma-buf: Document non-dynamic exporter expectations better

2021-06-21 Thread Daniel Vetter
Christian and me realized we have a pretty massive disconnect about
different interpretations of what dma_resv is used for by different
drivers. The discussion is much, much bigger than this change here,
but this is an important one:

Non-dynamic exporters must guarantee that the memory they return is
ready for use. They cannot expect importers to wait for the exclusive
fence. Only dynamic importers are required to obey the dma_resv fences
strictly (and more patches are needed to define exactly what this
means).

Christian has patches to update nouveau, radeon and amdgpu. The only
other driver using both ttm and supporting dma-buf export is qxl,
which only uses synchronous ttm_bo_move.

v2: To hammer this in document that dynamic importers _must_ wait for
the exclusive fence after having called dma_buf_map_attachment.

Cc: Christian König 
Signed-off-by: Daniel Vetter 
---
 drivers/dma-buf/dma-buf.c |  3 +++
 include/linux/dma-buf.h   | 15 +++
 2 files changed, 18 insertions(+)

diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
index e3ba5db5f292..65cbd7f0f16a 100644
--- a/drivers/dma-buf/dma-buf.c
+++ b/drivers/dma-buf/dma-buf.c
@@ -956,6 +956,9 @@ EXPORT_SYMBOL_GPL(dma_buf_unpin);
  * the underlying backing storage is pinned for as long as a mapping exists,
  * therefore users/importers should not hold onto a mapping for undue amounts 
of
  * time.
+ *
+ * Important: Dynamic importers must wait for the exclusive fence of the struct
+ * dma_resv attached to the DMA-BUF first.
  */
 struct sg_table *dma_buf_map_attachment(struct dma_buf_attachment *attach,
enum dma_data_direction direction)
diff --git a/include/linux/dma-buf.h b/include/linux/dma-buf.h
index 342585bd6dff..92eec38a03aa 100644
--- a/include/linux/dma-buf.h
+++ b/include/linux/dma-buf.h
@@ -96,6 +96,12 @@ struct dma_buf_ops {
 * This is called automatically for non-dynamic importers from
 * dma_buf_attach().
 *
+* Note that similar to non-dynamic exporters in their @map_dma_buf
+* callback the driver must guarantee that the memory is available for
+* use and cleared of any old data by the time this function returns.
+* Drivers which pipeline their buffer moves internally must wait for
+* all moves and clears to complete.
+*
 * Returns:
 *
 * 0 on success, negative error code on failure.
@@ -144,6 +150,15 @@ struct dma_buf_ops {
 * This is always called with the dmabuf->resv object locked when
 * the dynamic_mapping flag is true.
 *
+* Note that for non-dynamic exporters the driver must guarantee that
+* that the memory is available for use and cleared of any old data by
+* the time this function returns.  Drivers which pipeline their buffer
+* moves internally must wait for all moves and clears to complete.
+* Dynamic exporters do not need to follow this rule: For non-dynamic
+* importers the buffer is already pinned through @pin, which has the
+* same requirements. Dynamic importers otoh are required to obey the
+* dma_resv fences.
+*
 * Returns:
 *
 * A _table scatter list of or the backing storage of the DMA buffer,
-- 
2.32.0.rc2



Re: [PATCH v2 07/12] drm/panfrost: Reset the GPU when the AS_ACTIVE bit is stuck

2021-06-21 Thread Steven Price
On 21/06/2021 14:39, Boris Brezillon wrote:
> Things are unlikely to resolve until we reset the GPU. Let's not wait
> for other faults/timeout to happen to trigger this reset.
> 
> Signed-off-by: Boris Brezillon 

This one still haunts me... ;)

Reviewed-by: Steven Price 

> ---
>  drivers/gpu/drm/panfrost/panfrost_mmu.c | 5 -
>  1 file changed, 4 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/gpu/drm/panfrost/panfrost_mmu.c 
> b/drivers/gpu/drm/panfrost/panfrost_mmu.c
> index d5c624e776f1..d20bcaecb78f 100644
> --- a/drivers/gpu/drm/panfrost/panfrost_mmu.c
> +++ b/drivers/gpu/drm/panfrost/panfrost_mmu.c
> @@ -36,8 +36,11 @@ static int wait_ready(struct panfrost_device *pfdev, u32 
> as_nr)
>   ret = readl_relaxed_poll_timeout_atomic(pfdev->iomem + AS_STATUS(as_nr),
>   val, !(val & AS_STATUS_AS_ACTIVE), 10, 1000);
>  
> - if (ret)
> + if (ret) {
> + /* The GPU hung, let's trigger a reset */
> + panfrost_device_schedule_reset(pfdev);
>   dev_err(pfdev->dev, "AS_ACTIVE bit stuck\n");
> + }
>  
>   return ret;
>  }
> 



Re: [PATCH v2 06/12] drm/panfrost: Expose a helper to trigger a GPU reset

2021-06-21 Thread Steven Price
On 21/06/2021 14:39, Boris Brezillon wrote:
> Expose a helper to trigger a GPU reset so we can easily trigger reset
> operations outside the job timeout handler.
> 
> Signed-off-by: Boris Brezillon 

Reviewed-by: Steven Price 

> ---
>  drivers/gpu/drm/panfrost/panfrost_device.h | 8 
>  drivers/gpu/drm/panfrost/panfrost_job.c| 4 +---
>  2 files changed, 9 insertions(+), 3 deletions(-)
> 
> diff --git a/drivers/gpu/drm/panfrost/panfrost_device.h 
> b/drivers/gpu/drm/panfrost/panfrost_device.h
> index 2fe1550da7f8..1c6a3597eba0 100644
> --- a/drivers/gpu/drm/panfrost/panfrost_device.h
> +++ b/drivers/gpu/drm/panfrost/panfrost_device.h
> @@ -175,4 +175,12 @@ int panfrost_device_suspend(struct device *dev);
>  
>  const char *panfrost_exception_name(u32 exception_code);
>  
> +static inline void
> +panfrost_device_schedule_reset(struct panfrost_device *pfdev)
> +{
> + /* Schedule a reset if there's no reset in progress. */
> + if (!atomic_xchg(>reset.pending, 1))
> + schedule_work(>reset.work);
> +}
> +
>  #endif
> diff --git a/drivers/gpu/drm/panfrost/panfrost_job.c 
> b/drivers/gpu/drm/panfrost/panfrost_job.c
> index 1be80b3dd5d0..be5d3e4a1d0a 100644
> --- a/drivers/gpu/drm/panfrost/panfrost_job.c
> +++ b/drivers/gpu/drm/panfrost/panfrost_job.c
> @@ -458,9 +458,7 @@ static enum drm_gpu_sched_stat 
> panfrost_job_timedout(struct drm_sched_job
>   if (!panfrost_scheduler_stop(>js->queue[js], sched_job))
>   return DRM_GPU_SCHED_STAT_NOMINAL;
>  
> - /* Schedule a reset if there's no reset in progress. */
> - if (!atomic_xchg(>reset.pending, 1))
> - schedule_work(>reset.work);
> + panfrost_device_schedule_reset(pfdev);
>  
>   return DRM_GPU_SCHED_STAT_NOMINAL;
>  }
> 



[RESEND PATCH 2/3] drm/panel: Add connector_type for some EDT displays

2021-06-21 Thread Stefan Riedmueller
The connector_type for the following two EDT displays is missing:
 - EDT ETM0430G0DH6
 - EDT ETM0700G0BDH6

Both are parallel displays thus add the corresponding connector_type.

Signed-off-by: Stefan Riedmueller 
---
 drivers/gpu/drm/panel/panel-simple.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/drivers/gpu/drm/panel/panel-simple.c 
b/drivers/gpu/drm/panel/panel-simple.c
index 99edd640d700..109dc8c85947 100644
--- a/drivers/gpu/drm/panel/panel-simple.c
+++ b/drivers/gpu/drm/panel/panel-simple.c
@@ -1940,6 +1940,7 @@ static const struct panel_desc edt_etm0430g0dh6 = {
.width = 95,
.height = 54,
},
+   .connector_type = DRM_MODE_CONNECTOR_DPI,
 };
 
 static const struct drm_display_mode edt_et057090dhu_mode = {
@@ -2004,6 +2005,7 @@ static const struct panel_desc edt_etm0700g0bdh6 = {
},
.bus_format = MEDIA_BUS_FMT_RGB666_1X18,
.bus_flags = DRM_BUS_FLAG_DE_HIGH | DRM_BUS_FLAG_PIXDATA_DRIVE_POSEDGE,
+   .connector_type = DRM_MODE_CONNECTOR_DPI,
 };
 
 static const struct display_timing evervision_vgg804821_timing = {
-- 
2.25.1



Re: [PATCH v2 05/12] drm/panfrost: Disable the AS on unhandled page faults

2021-06-21 Thread Steven Price
On 21/06/2021 14:39, Boris Brezillon wrote:
> If we don't do that, we have to wait for the job timeout to expire
> before the fault jobs gets killed.
> 
> Signed-off-by: Boris Brezillon 

Don't we need to do something here to allow recovery of the MMU context
in the future? panfrost_mmu_disable() will zero out the MMU registers on
the hardware, but AFAICS panfrost_mmu_enable() won't be called to
restore the values until something evicts the address space (GPU power
down/reset or just too many other processes).

The ideal would be to block submission of new jobs from this context and
then wait until existing jobs have completed at which point the MMU
state can be restored and jobs allowed again.

But at a minimum I think we should have something like an 'MMU poisoned'
bit that panfrost_mmu_as_get() can check.
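
As a rough sketch of that idea (the as_faulty_mask field and its exact
placement are hypothetical here, they are not part of this series):

	/* In the fault handler, right after panfrost_mmu_disable(): */
	set_bit(as, pfdev->as_faulty_mask);	/* hypothetical bitmap */

	/* And in panfrost_mmu_as_get(), before reusing a cached AS: */
	if (mmu->as >= 0 && test_bit(mmu->as, pfdev->as_faulty_mask))
		mmu->as = -1;	/* poisoned, force a fresh AS assignment */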

Steve

> ---
>  drivers/gpu/drm/panfrost/panfrost_mmu.c | 6 +-
>  1 file changed, 5 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/gpu/drm/panfrost/panfrost_mmu.c 
> b/drivers/gpu/drm/panfrost/panfrost_mmu.c
> index 2a9bf30edc9d..d5c624e776f1 100644
> --- a/drivers/gpu/drm/panfrost/panfrost_mmu.c
> +++ b/drivers/gpu/drm/panfrost/panfrost_mmu.c
> @@ -661,7 +661,7 @@ static irqreturn_t panfrost_mmu_irq_handler_thread(int 
> irq, void *data)
>   if ((status & mask) == BIT(as) && (exception_type & 0xF8) == 
> 0xC0)
>   ret = panfrost_mmu_map_fault_addr(pfdev, as, addr);
>  
> - if (ret)
> + if (ret) {
>   /* terminal fault, print info about the fault */
>   dev_err(pfdev->dev,
>   "Unhandled Page fault in AS%d at VA 0x%016llX\n"
> @@ -679,6 +679,10 @@ static irqreturn_t panfrost_mmu_irq_handler_thread(int 
> irq, void *data)
>   access_type, access_type_name(pfdev, 
> fault_status),
>   source_id);
>  
> + /* Disable the MMU to stop jobs on this AS immediately 
> */
> + panfrost_mmu_disable(pfdev, as);
> + }
> +
>   status &= ~mask;
>  
>   /* If we received new MMU interrupts, process them before 
> returning. */
> 



[RESEND PATCH 3/3] drm/panel: Add bus_format and bus_flags for EDT ETM0430G0DH6

2021-06-21 Thread Stefan Riedmueller
Add corresponding bus_format and bus_flags for the EDT ETM0430G0DH6
display.

Signed-off-by: Stefan Riedmueller 
---
 drivers/gpu/drm/panel/panel-simple.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/drivers/gpu/drm/panel/panel-simple.c 
b/drivers/gpu/drm/panel/panel-simple.c
index 109dc8c85947..34a24cd6f2c8 100644
--- a/drivers/gpu/drm/panel/panel-simple.c
+++ b/drivers/gpu/drm/panel/panel-simple.c
@@ -1940,6 +1940,8 @@ static const struct panel_desc edt_etm0430g0dh6 = {
.width = 95,
.height = 54,
},
+   .bus_format = MEDIA_BUS_FMT_RGB666_1X18,
+   .bus_flags = DRM_BUS_FLAG_DE_HIGH | DRM_BUS_FLAG_PIXDATA_SAMPLE_POSEDGE,
.connector_type = DRM_MODE_CONNECTOR_DPI,
 };
 
-- 
2.25.1



[RESEND PATCH 1/3] drm/panel: Add connector_type and bus_format for AUO G104SN02 V2 panel

2021-06-21 Thread Stefan Riedmueller
The AUO G104SN02 V2 is an LVDS display which supports 6 and 8 bpc SPWG.
Add the corresponding connector type and 8 bpc as default bus_format.

Signed-off-by: Stefan Riedmueller 
Reviewed-by: Laurent Pinchart 
---
Hi,
I added the reviewed-by tag from Laurent Pinchart for the RESEND, hope
that is ok.
https://lore.kernel.org/dri-devel/ynchyskddg%2fjs...@pendragon.ideasonboard.com/

 drivers/gpu/drm/panel/panel-simple.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/drivers/gpu/drm/panel/panel-simple.c 
b/drivers/gpu/drm/panel/panel-simple.c
index be312b5c04dd..99edd640d700 100644
--- a/drivers/gpu/drm/panel/panel-simple.c
+++ b/drivers/gpu/drm/panel/panel-simple.c
@@ -1137,6 +1137,8 @@ static const struct panel_desc auo_g104sn02 = {
.width = 211,
.height = 158,
},
+   .bus_format = MEDIA_BUS_FMT_RGB888_1X7X4_SPWG,
+   .connector_type = DRM_MODE_CONNECTOR_LVDS,
 };
 
 static const struct drm_display_mode auo_g121ean01_mode = {
-- 
2.25.1



Re: [PATCH v2 05/12] drm/panfrost: Disable the AS on unhandled page faults

2021-06-21 Thread Boris Brezillon
On Mon, 21 Jun 2021 15:39:00 +0200
Boris Brezillon  wrote:

> If we don't do that, we have to wait for the job timeout to expire
> before the fault jobs gets killed.

 ^ faulty

> 
> Signed-off-by: Boris Brezillon 
> ---
>  drivers/gpu/drm/panfrost/panfrost_mmu.c | 6 +-
>  1 file changed, 5 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/gpu/drm/panfrost/panfrost_mmu.c 
> b/drivers/gpu/drm/panfrost/panfrost_mmu.c
> index 2a9bf30edc9d..d5c624e776f1 100644
> --- a/drivers/gpu/drm/panfrost/panfrost_mmu.c
> +++ b/drivers/gpu/drm/panfrost/panfrost_mmu.c
> @@ -661,7 +661,7 @@ static irqreturn_t panfrost_mmu_irq_handler_thread(int 
> irq, void *data)
>   if ((status & mask) == BIT(as) && (exception_type & 0xF8) == 
> 0xC0)
>   ret = panfrost_mmu_map_fault_addr(pfdev, as, addr);
>  
> - if (ret)
> + if (ret) {
>   /* terminal fault, print info about the fault */
>   dev_err(pfdev->dev,
>   "Unhandled Page fault in AS%d at VA 0x%016llX\n"
> @@ -679,6 +679,10 @@ static irqreturn_t panfrost_mmu_irq_handler_thread(int 
> irq, void *data)
>   access_type, access_type_name(pfdev, 
> fault_status),
>   source_id);
>  
> + /* Disable the MMU to stop jobs on this AS immediately 
> */
> + panfrost_mmu_disable(pfdev, as);
> + }
> +
>   status &= ~mask;
>  
>   /* If we received new MMU interrupts, process them before 
> returning. */



[PATCH] dma-buf: Document non-dynamic exporter expectations better

2021-06-21 Thread Daniel Vetter
Christian and I realized we have a pretty massive disconnect about
different interpretations of what dma_resv is used for by different
drivers. The discussion is much, much bigger than this change here,
but this is an important one:

Non-dynamic exporters must guarantee that the memory they return is
ready for use. They cannot expect importers to wait for the exclusive
fence. Only dynamic importers are required to obey the dma_resv fences
strictly (and more patches are needed to define exactly what this
means).

Christian has patches to update nouveau, radeon and amdgpu. The only
other driver using both ttm and supporting dma-buf export is qxl,
which only uses synchronous ttm_bo_move.
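
To illustrate what that rule means in practice, a non-dynamic exporter
that pipelines its clears/moves would have to flush them in its
@map_dma_buf callback, roughly like this (hypothetical sketch; the
bo->clear_fence field and the my_* helpers don't exist in any driver):

	static struct sg_table *
	my_map_dma_buf(struct dma_buf_attachment *attach,
		       enum dma_data_direction dir)
	{
		struct my_bo *bo = attach->dmabuf->priv;
		int ret;

		/* The importer won't look at dma_resv, so finish any
		 * pending async clear/move before handing out the mapping.
		 */
		if (bo->clear_fence) {
			ret = dma_fence_wait(bo->clear_fence, false);
			if (ret)
				return ERR_PTR(ret);
		}

		return my_build_sg_table(bo, attach->dev, dir);
	}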

Cc: Christian König 
Signed-off-by: Daniel Vetter 
---
 include/linux/dma-buf.h | 15 +++
 1 file changed, 15 insertions(+)

diff --git a/include/linux/dma-buf.h b/include/linux/dma-buf.h
index 342585bd6dff..92eec38a03aa 100644
--- a/include/linux/dma-buf.h
+++ b/include/linux/dma-buf.h
@@ -96,6 +96,12 @@ struct dma_buf_ops {
 * This is called automatically for non-dynamic importers from
 * dma_buf_attach().
 *
+* Note that similar to non-dynamic exporters in their @map_dma_buf
+* callback the driver must guarantee that the memory is available for
+* use and cleared of any old data by the time this function returns.
+* Drivers which pipeline their buffer moves internally must wait for
+* all moves and clears to complete.
+*
 * Returns:
 *
 * 0 on success, negative error code on failure.
@@ -144,6 +150,15 @@ struct dma_buf_ops {
 * This is always called with the dmabuf->resv object locked when
 * the dynamic_mapping flag is true.
 *
> +* Note that for non-dynamic exporters the driver must guarantee
+* that the memory is available for use and cleared of any old data by
+* the time this function returns.  Drivers which pipeline their buffer
+* moves internally must wait for all moves and clears to complete.
+* Dynamic exporters do not need to follow this rule: For non-dynamic
+* importers the buffer is already pinned through @pin, which has the
+* same requirements. Dynamic importers otoh are required to obey the
+* dma_resv fences.
+*
 * Returns:
 *
 * A _table scatter list of or the backing storage of the DMA buffer,
-- 
2.32.0.rc2



Re: [PATCH 1/3] drm/panel: Add connector_type and bus_format for AUO G104SN02 V2 panel

2021-06-21 Thread Stefan Riedmüller
Hi Sam,

On Mon, 2021-06-21 at 16:17 +0200, Sam Ravnborg wrote:
> Hi Stefan,
> 
> On Mon, Jun 21, 2021 at 08:22:10AM +, Stefan Riedmüller wrote:
> > Hi,
> > 
> > another gentle ping.
> > 
> > Also adding Laurent Pinchart to CC.
> 
> Can I ask you to resend the whole lot? I have resurfaced after an
> off-line period and deleted all pending mails.
> 
> I could probably hunt down the mails somewhere but resend is easier on
> my end.

Sure, no problem, I'll send it out ASAP.

Stefan

> 
>   Sam


Re: [PATCH 3/3] drm/amdgpu: wait for moving fence after pinning

2021-06-21 Thread Daniel Vetter
On Mon, Jun 21, 2021 at 03:03:28PM +0200, Christian König wrote:
> We actually need to wait for the moving fence after pinning
> the BO to make sure that the pin is completed.
> 
> Signed-off-by: Christian König 
> CC: sta...@kernel.org
> ---
>  drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c | 14 +-
>  1 file changed, 13 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
> index baa980a477d9..37ec59365080 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
> @@ -214,9 +214,21 @@ static int amdgpu_dma_buf_pin(struct dma_buf_attachment 
> *attach)
>  {
>   struct drm_gem_object *obj = attach->dmabuf->priv;
>   struct amdgpu_bo *bo = gem_to_amdgpu_bo(obj);
> + int r;
>  
>   /* pin buffer into GTT */
> - return amdgpu_bo_pin(bo, AMDGPU_GEM_DOMAIN_GTT);
> + r = amdgpu_bo_pin(bo, AMDGPU_GEM_DOMAIN_GTT);
> + if (r)
> + return r;
> +
> + if (bo->tbo.moving) {

dma-buf.c guarantees we have the reservation here, so we're fine.

Reviewed-by: Daniel Vetter 

> + r = dma_fence_wait(bo->tbo.moving, true);
> + if (r) {
> + amdgpu_bo_unpin(bo);
> + return r;
> + }
> + }
> + return 0;
>  }
>  
>  /**
> -- 
> 2.25.1
> 

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch


Re: [PATCH 2/3] drm/radeon: wait for moving fence after pinning

2021-06-21 Thread Daniel Vetter
On Mon, Jun 21, 2021 at 03:03:27PM +0200, Christian König wrote:
> We actually need to wait for the moving fence after pinning
> the BO to make sure that the pin is completed.
> 
> Signed-off-by: Christian König 
> CC: sta...@kernel.org
> ---
>  drivers/gpu/drm/radeon/radeon_prime.c | 16 +---
>  1 file changed, 13 insertions(+), 3 deletions(-)
> 
> diff --git a/drivers/gpu/drm/radeon/radeon_prime.c 
> b/drivers/gpu/drm/radeon/radeon_prime.c
> index 42a87948e28c..4a90807351e7 100644
> --- a/drivers/gpu/drm/radeon/radeon_prime.c
> +++ b/drivers/gpu/drm/radeon/radeon_prime.c
> @@ -77,9 +77,19 @@ int radeon_gem_prime_pin(struct drm_gem_object *obj)
>  
>   /* pin buffer into GTT */
>   ret = radeon_bo_pin(bo, RADEON_GEM_DOMAIN_GTT, NULL);
> - if (likely(ret == 0))
> - bo->prime_shared_count++;
> -
> + if (unlikely(ret))
> + goto error;
> +
> + if (bo->tbo.moving) {
> + ret = dma_fence_wait(bo->tbo.moving, false);

Here we wait while holding the reservation, so we should be all fine. Maybe
not the nicest to wait while locked, but also I don't think it'll matter.

Reviewed-by: Daniel Vetter 

> + if (unlikely(ret)) {
> + radeon_bo_unpin(bo);
> + goto error;
> + }
> + }
> +
> + bo->prime_shared_count++;
> +error:
>   radeon_bo_unreserve(bo);
>   return ret;
>  }
> -- 
> 2.25.1
> 

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch


Re: [PATCH v2 04/12] drm/panfrost: Expose exception types to userspace

2021-06-21 Thread Boris Brezillon
On Mon, 21 Jun 2021 15:49:14 +0100
Steven Price  wrote:

> On 21/06/2021 14:38, Boris Brezillon wrote:
> > Job headers contain an exception type field which might be read and
> > converted to a human readable string by tracing tools. Let's expose
> > the exception type as an enum so we share the same definition.
> > 
> > Signed-off-by: Boris Brezillon 
> > ---
> >  include/uapi/drm/panfrost_drm.h | 65 +
> >  1 file changed, 65 insertions(+)
> > 
> > diff --git a/include/uapi/drm/panfrost_drm.h 
> > b/include/uapi/drm/panfrost_drm.h
> > index 061e700dd06c..9a05d57d0118 100644
> > --- a/include/uapi/drm/panfrost_drm.h
> > +++ b/include/uapi/drm/panfrost_drm.h
> > @@ -224,6 +224,71 @@ struct drm_panfrost_madvise {
> > __u32 retained;   /* out, whether backing store still exists */
> >  };
> >  
> > +/* The exception types */
> > +
> > +enum drm_panfrost_exception_type {
> > +   DRM_PANFROST_EXCEPTION_OK = 0x00,
> > +   DRM_PANFROST_EXCEPTION_DONE = 0x01,  
> 
> Any reason to miss INTERRUPTED? Although I don't think you'll ever see it.

Oops, that one is marked 'reserved' on Bifrost. I'll add it.

> 
> > +   DRM_PANFROST_EXCEPTION_STOPPED = 0x03,
> > +   DRM_PANFROST_EXCEPTION_TERMINATED = 0x04,
> > +   DRM_PANFROST_EXCEPTION_KABOOM = 0x05,
> > +   DRM_PANFROST_EXCEPTION_EUREKA = 0x06,  
> 
> Interestingly KABOOM/EUREKA are missing from panfrost_exception_name()

Addressed in patch 8.

> 
> > +   DRM_PANFROST_EXCEPTION_ACTIVE = 0x08,
> > +   DRM_PANFROST_EXCEPTION_JOB_CONFIG_FAULT = 0x40,
> > +   DRM_PANFROST_EXCEPTION_JOB_POWER_FAULT = 0x41,
> > +   DRM_PANFROST_EXCEPTION_JOB_READ_FAULT = 0x42,
> > +   DRM_PANFROST_EXCEPTION_JOB_WRITE_FAULT = 0x43,
> > +   DRM_PANFROST_EXCEPTION_JOB_AFFINITY_FAULT = 0x44,
> > +   DRM_PANFROST_EXCEPTION_JOB_BUS_FAULT = 0x48,
> > +   DRM_PANFROST_EXCEPTION_INSTR_INVALID_PC = 0x50,
> > +   DRM_PANFROST_EXCEPTION_INSTR_INVALID_ENC = 0x51,  
> 
> 0x52: INSTR_TYPE_MISMATCH
> 0x53: INSTR_OPERAND_FAULT
> 0x54: INSTR_TLS_FAULT
> 
> > +   DRM_PANFROST_EXCEPTION_INSTR_BARRIER_FAULT = 0x55,  
> 
> 0x56: INSTR_ALIGN_FAULT
> 
> By the looks of it this is probably the Bifrost list and missing those
> codes which are Midgard only, whereas panfrost_exception_name() looks
> like it's missing some Bifrost status codes.

Yep, I'll add the missing ones.

> 
> Given this is UAPI there is some argument for missing e.g. INTERRUPTED
> (I'm not sure it was ever actually implemented in hardware and the term
> INTERRUPTED might be reused in future), but it seems a bit wrong just to
> have Bifrost values here.

Definitely, I just didn't notice Midgard and Bifrost had different sets
of exceptions.

> 
> Steve
> 
> > +   DRM_PANFROST_EXCEPTION_DATA_INVALID_FAULT = 0x58,
> > +   DRM_PANFROST_EXCEPTION_TILE_RANGE_FAULT = 0x59,
> > +   DRM_PANFROST_EXCEPTION_ADDR_RANGE_FAULT = 0x5a,
> > +   DRM_PANFROST_EXCEPTION_IMPRECISE_FAULT = 0x5b,
> > +   DRM_PANFROST_EXCEPTION_OOM = 0x60,
> > +   DRM_PANFROST_EXCEPTION_UNKNOWN = 0x7f,
> > +   DRM_PANFROST_EXCEPTION_DELAYED_BUS_FAULT = 0x80,
> > +   DRM_PANFROST_EXCEPTION_GPU_SHAREABILITY_FAULT = 0x88,
> > +   DRM_PANFROST_EXCEPTION_SYS_SHAREABILITY_FAULT = 0x89,
> > +   DRM_PANFROST_EXCEPTION_GPU_CACHEABILITY_FAULT = 0x8a,
> > +   DRM_PANFROST_EXCEPTION_TRANSLATION_FAULT_0 = 0xc0,
> > +   DRM_PANFROST_EXCEPTION_TRANSLATION_FAULT_1 = 0xc1,
> > +   DRM_PANFROST_EXCEPTION_TRANSLATION_FAULT_2 = 0xc2,
> > +   DRM_PANFROST_EXCEPTION_TRANSLATION_FAULT_3 = 0xc3,
> > +   DRM_PANFROST_EXCEPTION_TRANSLATION_FAULT_4 = 0xc4,
> > +   DRM_PANFROST_EXCEPTION_TRANSLATION_FAULT_IDENTITY = 0xc7,
> > +   DRM_PANFROST_EXCEPTION_PERM_FAULT_0 = 0xc8,
> > +   DRM_PANFROST_EXCEPTION_PERM_FAULT_1 = 0xc9,
> > +   DRM_PANFROST_EXCEPTION_PERM_FAULT_2 = 0xca,
> > +   DRM_PANFROST_EXCEPTION_PERM_FAULT_3 = 0xcb,
> > +   DRM_PANFROST_EXCEPTION_TRANSTAB_BUS_FAULT_0 = 0xd0,
> > +   DRM_PANFROST_EXCEPTION_TRANSTAB_BUS_FAULT_1 = 0xd1,
> > +   DRM_PANFROST_EXCEPTION_TRANSTAB_BUS_FAULT_2 = 0xd2,
> > +   DRM_PANFROST_EXCEPTION_TRANSTAB_BUS_FAULT_3 = 0xd3,
> > +   DRM_PANFROST_EXCEPTION_ACCESS_FLAG_0 = 0xd8,
> > +   DRM_PANFROST_EXCEPTION_ACCESS_FLAG_1 = 0xd9,
> > +   DRM_PANFROST_EXCEPTION_ACCESS_FLAG_2 = 0xda,
> > +   DRM_PANFROST_EXCEPTION_ACCESS_FLAG_3 = 0xdb,
> > +   DRM_PANFROST_EXCEPTION_ADDR_SIZE_FAULT_IN0 = 0xe0,
> > +   DRM_PANFROST_EXCEPTION_ADDR_SIZE_FAULT_IN1 = 0xe1,
> > +   DRM_PANFROST_EXCEPTION_ADDR_SIZE_FAULT_IN2 = 0xe2,
> > +   DRM_PANFROST_EXCEPTION_ADDR_SIZE_FAULT_IN3 = 0xe3,
> > +   DRM_PANFROST_EXCEPTION_ADDR_SIZE_FAULT_OUT0 = 0xe4,
> > +   DRM_PANFROST_EXCEPTION_ADDR_SIZE_FAULT_OUT1 = 0xe5,
> > +   DRM_PANFROST_EXCEPTION_ADDR_SIZE_FAULT_OUT2 = 0xe6,
> > +   DRM_PANFROST_EXCEPTION_ADDR_SIZE_FAULT_OUT3 = 0xe7,
> > +   DRM_PANFROST_EXCEPTION_MEM_ATTR_FAULT_0 = 0xe8,
> > +   DRM_PANFROST_EXCEPTION_MEM_ATTR_FAULT_1 = 0xe9,
> > +   DRM_PANFROST_EXCEPTION_MEM_ATTR_FAULT_2 = 0xea,
> > +   DRM_PANFROST_EXCEPTION_MEM_ATTR_FAULT_3 = 0xeb,
> > +   

Re: [PATCH 1/3] drm/nouveau: wait for moving fence after pinning

2021-06-21 Thread Daniel Vetter
On Mon, Jun 21, 2021 at 03:03:26PM +0200, Christian König wrote:
> We actually need to wait for the moving fence after pinning
> the BO to make sure that the pin is completed.
> 
> Signed-off-by: Christian König 
> CC: sta...@kernel.org
> ---
>  drivers/gpu/drm/nouveau/nouveau_prime.c | 8 +++-
>  1 file changed, 7 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/gpu/drm/nouveau/nouveau_prime.c 
> b/drivers/gpu/drm/nouveau/nouveau_prime.c
> index 347488685f74..591738545eba 100644
> --- a/drivers/gpu/drm/nouveau/nouveau_prime.c
> +++ b/drivers/gpu/drm/nouveau/nouveau_prime.c
> @@ -93,7 +93,13 @@ int nouveau_gem_prime_pin(struct drm_gem_object *obj)
>   if (ret)
>   return -EINVAL;
>  
> - return 0;
> + if (nvbo->bo.moving) {

Don't we need to hold the dma_resv to read this? We can grab a reference
and then unlock, but I think just unlocked wait can go boom pretty easily
(since we don't hold a reference or lock so someone else can jump in and
free the moving fence).
-Daniel
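
A sketch of the grab-a-reference-then-wait pattern suggested above
(untested; the exact resv field and locking would need double-checking
against the actual nouveau code):

	struct dma_fence *moving;

	dma_resv_lock(nvbo->bo.base.resv, NULL);
	moving = dma_fence_get(nvbo->bo.moving);
	dma_resv_unlock(nvbo->bo.base.resv);

	if (moving) {
		ret = dma_fence_wait(moving, true);
		dma_fence_put(moving);
		if (ret)
			nouveau_bo_unpin(nvbo);
	}

	return ret;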

> + ret = dma_fence_wait(nvbo->bo.moving, true);
> + if (ret)
> + nouveau_bo_unpin(nvbo);
> + }
> +
> + return ret;
>  }
>  
>  void nouveau_gem_prime_unpin(struct drm_gem_object *obj)
> -- 
> 2.25.1
> 

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch


Re: [PATCH v2 02/12] drm/panfrost: Get rid of the unused JS_STATUS_EVENT_ACTIVE definition

2021-06-21 Thread Steven Price
On 21/06/2021 15:49, Boris Brezillon wrote:
> On Mon, 21 Jun 2021 15:34:35 +0100
> Steven Price  wrote:
> 
>> On 21/06/2021 14:38, Boris Brezillon wrote:
>>> Exception types will be defined as an enum in panfrost_drm.h so userspace
>>> and use the same definitions if needed.  
>>
>> s/and/can/ ?
>>
>> While it is (currently) unused in the kernel, this is a hardware value
>> so I'm not sure why it's worth removing this and not the other
>> (currently) unused values here. This is the value returned from the
>> JS_STATUS register when the slot is actively processing a job.
> 
> Hm, what's the point of having the same value defined in 2 places
> (DRM_PANFROST_EXCEPTION_ACTIVE defined in patch 3 vs
> JS_STATUS_EVENT_ACTIVE here)? I mean, values defined in the
> drm_panfrost_exception_type enum apply to the JS_STATUS registers too,
> right?

Thinking about this more I guess I agree with you: this is an oddity and
your following patch adds a (more) complete list. You've convinced me -
with my nit above fixed:

Reviewed-by: Steven Price 


Re: [Intel-gfx] [PATCH] drm/i915/eb: Fix pagefault disabling in the first slowpath

2021-06-21 Thread Daniel Vetter
On Mon, Jun 21, 2021 at 04:30:50PM +0200, Maarten Lankhorst wrote:
> On 21-06-2021 at 11:33, Matthew Auld wrote:
> > On 18/06/2021 22:45, Daniel Vetter wrote:
> >> In
> >>
> >> commit ebc0808fa2da0548a78e715858024cb81cd732bc
> >> Author: Chris Wilson 
> >> Date:   Tue Oct 18 13:02:51 2016 +0100
> >>
> >>  drm/i915: Restrict pagefault disabling to just around copy_from_user()
> >>
> >> we entirely missed that there's a slow path call to eb_relocate_entry
> >> (or i915_gem_execbuffer_relocate_entry as it was called back then)
> >> which was left fully wrapped by pagefault_disable/enable() calls.
> >> Previously any issues with blocking calls were handled by the
> >> following code:
> >>
> >> /* we can't wait for rendering with pagefaults disabled */
> >> if (pagefault_disabled() && !object_is_idle(obj))
> >>     return -EFAULT;
> >>
> >> Now at this point the prefaulting was still around, which means in
> >> normal applications it was very hard to hit this bug. No idea why the
> >> regressions in igts weren't caught.
> >>
> >> Now this all changed big time with 2 patches merged closely together.
> >>
> >> First
> >>
> >> commit 2889caa9232109afc8881f29a2205abeb5709d0c
> >> Author: Chris Wilson 
> >> Date:   Fri Jun 16 15:05:19 2017 +0100
> >>
> >>  drm/i915: Eliminate lots of iterations over the execobjects array
> >>
> >> removes the prefaulting from the first relocation path, pushing it into
> >> the first slowpath (of which this patch added a total of 3 escalation
> >> levels). This would have really quickly uncovered the above bug, were
> >> it not for immediately adding a duct-tape on top with
> >>
> >> commit 7dd4f6729f9243bd7046c6f04c107a456bda38eb
> >> Author: Chris Wilson 
> >> Date:   Fri Jun 16 15:05:24 2017 +0100
> >>
> >>  drm/i915: Async GPU relocation processing
> >>
> >> by pushing all the relocation patching to the gpu if the buffer
> >> was busy, which avoided all the possible blocking calls.
> >>
> >> The entire slowpath was then furthermore ditched in
> >>
> >> commit 7dc8f1143778a35b190f9413f228b3cf28f67f8d
> >> Author: Chris Wilson 
> >> Date:   Wed Mar 11 16:03:10 2020 +
> >>
> >>  drm/i915/gem: Drop relocation slowpath
> >>
> >> and resurrected in
> >>
> >> commit fd1500fcd4420eee06e2c7f3aa6067b78ac05871
> >> Author: Maarten Lankhorst 
> >> Date:   Wed Aug 19 16:08:43 2020 +0200
> >>
> >>  Revert "drm/i915/gem: Drop relocation slowpath".
> >>
> >> but this did not further impact what's going on.
> >>
> >> Since pagefault_disable/enable is an atomic section, any sleeping in
> >> there is prohibited, and we definitely do that without gpu relocations
> >> since we have to wait for the gpu usage to finish before we can patch
> >> up the relocations.
> >
> > Why do we also need the __copy_from_user_inatomic in eb_relocate_vma()?
> >
> > Reviewed-by: Matthew Auld 
> >
> >>
> >> Signed-off-by: Daniel Vetter 
> >> Cc: Jon Bloomfield 
> >> Cc: Chris Wilson 
> >> Cc: Maarten Lankhorst 
> >> Cc: Joonas Lahtinen 
> >> Cc: Daniel Vetter 
> >> Cc: "Thomas Hellström" 
> >> Cc: Matthew Auld 
> >> Cc: Lionel Landwerlin 
> >> Cc: Dave Airlie 
> >> Cc: Jason Ekstrand 
> >> ---
> >>   drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c | 2 --
> >>   1 file changed, 2 deletions(-)
> >>
> >> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c 
> >> b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
> >> index 6539b82dda54..7ff2fc3c0b2c 100644
> >> --- a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
> >> +++ b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
> >> @@ -2082,9 +2082,7 @@ static noinline int eb_relocate_parse_slow(struct 
> >> i915_execbuffer *eb,
> >>     list_for_each_entry(ev, &eb->relocs, reloc_link) {
> >>   if (!have_copy) {
> >> -    pagefault_disable();
> >>   err = eb_relocate_vma(eb, ev);
> >> -    pagefault_enable();
> >>   if (err)
> >>   break;
> >>   } else {
> >>
> Reviewed-by: Maarten Lankhorst 

Pushed to drm-intel-gt-next, thanks to both of you for taking a look.
-Daniel

> 
> ___
> Intel-gfx mailing list
> intel-...@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/intel-gfx

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch


Re: [PATCH v3 1/2] habanalabs: define uAPI to export FD for DMA-BUF

2021-06-21 Thread Jason Gunthorpe
On Mon, Jun 21, 2021 at 04:20:35PM +0200, Daniel Vetter wrote:

> Also unless we're actually doing this properly there's zero incentive for
> me to review the kernel code and check whether it follows the rules
> correctly, so you have excellent chances that you just break the rules.
And dma_buf/fence are tricky enough that you're pretty much guaranteed to
> break the rules if you're not involved in the discussions. Just now we
> have a big one where everyone involved (who's been doing this for 10+
> years all at least) realizes we've fucked up big time.

This is where I come from on dmabuf: it is fiendishly
complicated. Don't use it unless you absolutely have to, are in DRM,
and have people like Daniel helping to make sure you use it right.

Its whole premise and design are compromised by specialty historical
implementation choices on the GPU side.

Jason


  1   2   3   >