Re: [RFC] drm/panic: Add drm panic locking

2024-03-14 Thread Daniel Vetter
On Fri, Mar 01, 2024 at 02:03:12PM +0100, Jocelyn Falempe wrote:
> Thanks for the patch.
> 
> I think it's missing the lock initialization, so we need to add a
> raw_spin_lock_init() call in the drm device initialization.
> 
> Also I'm wondering if it makes sense to put that under the CONFIG_DRM_PANIC
> flag, so that if you don't enable it, panic_lock() and panic_unlock() would
> be no-ops.
> But that may not work if the driver uses this lock to protect some register
> access.

If we get drivers to use this for some of their own locking, we have to
keep it enabled unconditionally. Also I think locking that's only
conditional on Kconfig is just a bit too surprising to be a good idea,
irrespective of this specific case.
-Sima

> 
> Best regards,
> 
> -- 
> 
> Jocelyn
> 
> On 01/03/2024 11:39, Daniel Vetter wrote:
> > Rough sketch for the locking of drm panic printing code. The upshot of
> > this approach is that we can pretty much entirely rely on the atomic
> > commit flow, with the pair of raw_spin_lock/unlock providing any
> > barriers we need, without having to create really big critical
> > sections in code.
> > 
> > This also avoids the need for drivers to explicitly update the
> > panic handler state, which they might forget to do, or not do
> > consistently, and then we blow up at the worst possible times.
> > 
> > It is somewhat racy against a concurrent atomic update, and we might
> > write into a buffer which the hardware will never display. But there's
> > fundamentally no way to avoid that - if we do the panic state update
> > explicitly after writing to the hardware, we might instead write to an
> > old buffer that the user will barely ever see.
> > 
> > Note that an rcu-protected dereference of plane->state would give us
> > the same guarantees, but it has the downside that we then need to
> > protect the plane state freeing functions with call_rcu too. Which
> > would very widely impact a lot of code and therefore doesn't seem
> > worth the complexity compared to a raw spinlock with very tiny
> > critical sections. Plus rcu cannot be used to protect access to
> > peek/poke registers anyway, so we'd still need it for those cases.
> > 
> > Peek/poke registers for vram access (or a gart pte reserved just for
> > panic code) are also the reason I've gone with a per-device and not
> > per-plane spinlock, since usually these things are global for the
> > entire display. Going with per-plane locks would mean drivers for such
> > hardware would need additional locks, which we don't want, since it
> > deviates from the per-console takeover locks design.
> > 
> > Longer term it might be useful if the panic notifiers grow a bit more
> > structure than just the absolute bare
> > EXPORT_SYMBOL(panic_notifier_list) - somewhat aside, why is that not
> > EXPORT_SYMBOL_GPL ... If panic notifiers were more like console
> > drivers with proper register/unregister interfaces, we could perhaps
> > reuse the very fancy console lock with all its check and takeover
> > semantics that John Ogness is developing to fix the console_lock mess.
> > But for the initial cut of drm panic printing support I don't think
> > we need that, because the critical sections are extremely small and
> > only happen once per display refresh. So generally just 60 tiny locked
> > sections per second, which is nothing compared to a serial console
> > running at 115 kbaud doing really slow mmio writes for each byte. So for
> > now the raw spin trylock in the drm panic notifier callback should be
> > good enough.
> > 
> > Another benefit of making panic notifiers more like full-blown
> > consoles (that are only used in panics) would be that we get the
> > two-stage design, where first all the safe outputs are used, and then
> > the dangerous takeover tricks are deployed (where for display drivers
> > we might also try to intercept any in-flight display buffer flips,
> > which, if we race and misprogram fifos and watermarks, can hang the
> > memory controller on some hw).
> > 
> > For context the actual implementation on the drm side is by Jocelyn
> > and this patch is meant to be combined with the overall approach in
> > v7 (v8 is a bit less flexible, which I think is the wrong direction):
> > 
> > https://lore.kernel.org/dri-devel/20240104160301.185915-1-jfale...@redhat.com/
> > 
> > Note that the locking is very much not correct there, hence this
> > separate rfc.
> > 
> > v2:
> > - fix authorship, this was all my typing
> > - some typo oopsies
> > - link to the drm panic work by Jocelyn for context
> > 
> > Signed-off-by: Daniel Vetter 
> > Cc: Jocelyn Falempe 
> > Cc: Andrew Morton 
> > Cc: "Peter Zijlstra (Intel)" 
> > Cc: Lukas Wunner 
> > Cc: Petr Mladek 
> > Cc: Steven Rostedt 
> > Cc: John Ogness 
> > Cc: Sergey Senozhatsky 
> > Cc: Maarten Lankhorst 
> > Cc: Maxime Ripard 
> > Cc: Thomas Zimmermann 
> > Cc: David Airlie 
> > Cc: Daniel Vetter 
> > ---
> >   drivers/gpu/drm/drm_atomic_helper.c |  3 +
> >   include/drm/drm_mode_config.h   | 

Re: [RFC] drm/panic: Add drm panic locking

2024-03-14 Thread Daniel Vetter
On Tue, Mar 05, 2024 at 09:20:04AM +0106, John Ogness wrote:
> Hi Daniel,
> 
> Great to see this moving forward!
> 
> On 2024-03-01, Daniel Vetter  wrote:
> > But for the initial cut of drm panic printing support I don't think
> > we need that, because the critical sections are extremely small and
> > only happen once per display refresh. So generally just 60 tiny locked
> > sections per second, which is nothing compared to a serial console
> > running at 115 kbaud doing really slow mmio writes for each byte. So for
> > now the raw spin trylock in the drm panic notifier callback should be
> > good enough.
> 
> Is there a reason you do not use the irqsave/irqrestore variants? By
> leaving interrupts enabled, there is the risk that a panic from any
> interrupt handler may block the drm panic handler.

tbh I simply did not consider that it could be useful. But yeah, if we're
unlucky and an interrupt happens in here and dies, the drm panic handler
cannot run. And this code is definitely not hot enough to matter; the
usual driver code for a plane flip does a few more irqsafe spinlocks on
top. One more doesn't add anything I think, and I guess if it does we'll
notice :-)

Also irqsave makes drm_panic_lock/unlock a bit more widely useful for
protecting driver mmio access, since then it also works from irq handlers.
It means we have to pass irqflags around, but that sounds acceptable. So
this very much has my vote.
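
Roughly what I have in mind for the irqsave flavour - just a sketch,
assuming the lock stays in drm_mode_config like in this RFC, with the
final names still up for bikeshedding:

/* Sketch only: irqsave variants so this also works from irq handlers. */
#define drm_panic_lock(dev, flags) \
	raw_spin_lock_irqsave(&(dev)->mode_config.panic_lock, flags)

#define drm_panic_unlock(dev, flags) \
	raw_spin_unlock_irqrestore(&(dev)->mode_config.panic_lock, flags)

/* The panic notifier keeps using the trylock variant. */
#define drm_panic_trylock(dev, flags) \
	raw_spin_trylock_irqsave(&(dev)->mode_config.panic_lock, flags)
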
-Sima
-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch


Re: [RFC] drm/panic: Add drm panic locking

2024-03-05 Thread John Ogness
Hi Daniel,

Great to see this moving forward!

On 2024-03-01, Daniel Vetter  wrote:
> But for the initial cut of drm panic printing support I don't think
> we need that, because the critical sections are extremely small and
> only happen once per display refresh. So generally just 60 tiny locked
> sections per second, which is nothing compared to a serial console
> running at 115 kbaud doing really slow mmio writes for each byte. So for
> now the raw spin trylock in the drm panic notifier callback should be
> good enough.

Is there a reason you do not use the irqsave/irqrestore variants? By
leaving interrupts enabled, there is the risk that a panic from any
interrupt handler may block the drm panic handler.

John Ogness


Re: [RFC] drm/panic: Add drm panic locking

2024-03-01 Thread Jocelyn Falempe

Thanks for the patch.

I think it's missing the lock initialization, so we need to add a
raw_spin_lock_init() call in the drm device initialization.
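
Something like this in the device init path I suppose (sketch only,
assuming drmm_mode_config_init() is the right place, since the lock
lives in drm_mode_config):

	/* next to the other mode_config lock initializations */
	raw_spin_lock_init(&dev->mode_config.panic_lock);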


Also I'm wondering if it makes sense to put that under the
CONFIG_DRM_PANIC flag, so that if you don't enable it, panic_lock() and
panic_unlock() would be no-ops.
But that may not work if the driver uses this lock to protect some
register access.
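
For the config option, what I had in mind is roughly this in
drm_panic.h (sketch only, the real helper names are whatever this
patch ends up using):

#ifdef CONFIG_DRM_PANIC
#define drm_panic_lock(dev) \
	raw_spin_lock(&(dev)->mode_config.panic_lock)
#define drm_panic_unlock(dev) \
	raw_spin_unlock(&(dev)->mode_config.panic_lock)
#else
static inline void drm_panic_lock(struct drm_device *dev) {}
static inline void drm_panic_unlock(struct drm_device *dev) {}
#endif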


Best regards,

--

Jocelyn

On 01/03/2024 11:39, Daniel Vetter wrote:

Rough sketch for the locking of drm panic printing code. The upshot of
this approach is that we can pretty much entirely rely on the atomic
commit flow, with the pair of raw_spin_lock/unlock providing any
barriers we need, without having to create really big critical
sections in code.

This also avoids the need for drivers to explicitly update the
panic handler state, which they might forget to do, or not do
consistently, and then we blow up at the worst possible times.

It is somewhat racy against a concurrent atomic update, and we might
write into a buffer which the hardware will never display. But there's
fundamentally no way to avoid that - if we do the panic state update
explicitly after writing to the hardware, we might instead write to an
old buffer that the user will barely ever see.

Note that an rcu-protected dereference of plane->state would give us
the same guarantees, but it has the downside that we then need to
protect the plane state freeing functions with call_rcu too. Which
would very widely impact a lot of code and therefore doesn't seem
worth the complexity compared to a raw spinlock with very tiny
critical sections. Plus rcu cannot be used to protect access to
peek/poke registers anyway, so we'd still need it for those cases.

Peek/poke registers for vram access (or a gart pte reserved just for
panic code) are also the reason I've gone with a per-device and not
per-plane spinlock, since usually these things are global for the
entire display. Going with per-plane locks would mean drivers for such
hardware would need additional locks, which we don't want, since it
deviates from the per-console takeover locks design.

Longer term it might be useful if the panic notifiers grow a bit more
structure than just the absolute bare
EXPORT_SYMBOL(panic_notifier_list) - somewhat aside, why is that not
EXPORT_SYMBOL_GPL ... If panic notifiers were more like console
drivers with proper register/unregister interfaces, we could perhaps
reuse the very fancy console lock with all its check and takeover
semantics that John Ogness is developing to fix the console_lock mess.
But for the initial cut of drm panic printing support I don't think
we need that, because the critical sections are extremely small and
only happen once per display refresh. So generally just 60 tiny locked
sections per second, which is nothing compared to a serial console
running at 115 kbaud doing really slow mmio writes for each byte. So for
now the raw spin trylock in the drm panic notifier callback should be
good enough.

Another benefit of making panic notifiers more like full-blown
consoles (that are only used in panics) would be that we get the
two-stage design, where first all the safe outputs are used, and then
the dangerous takeover tricks are deployed (where for display drivers
we might also try to intercept any in-flight display buffer flips,
which, if we race and misprogram fifos and watermarks, can hang the
memory controller on some hw).

For context the actual implementation on the drm side is by Jocelyn
and this patch is meant to be combined with the overall approach in
v7 (v8 is a bit less flexible, which I think is the wrong direction):

https://lore.kernel.org/dri-devel/20240104160301.185915-1-jfale...@redhat.com/

Note that the locking is very much not correct there, hence this
separate rfc.

v2:
- fix authorship, this was all my typing
- some typo oopsies
- link to the drm panic work by Jocelyn for context

Signed-off-by: Daniel Vetter 
Cc: Jocelyn Falempe 
Cc: Andrew Morton 
Cc: "Peter Zijlstra (Intel)" 
Cc: Lukas Wunner 
Cc: Petr Mladek 
Cc: Steven Rostedt 
Cc: John Ogness 
Cc: Sergey Senozhatsky 
Cc: Maarten Lankhorst 
Cc: Maxime Ripard 
Cc: Thomas Zimmermann 
Cc: David Airlie 
Cc: Daniel Vetter 
---
  drivers/gpu/drm/drm_atomic_helper.c |  3 +
  include/drm/drm_mode_config.h   | 10 +++
  include/drm/drm_panic.h | 99 +
  3 files changed, 112 insertions(+)
  create mode 100644 include/drm/drm_panic.h

diff --git a/drivers/gpu/drm/drm_atomic_helper.c b/drivers/gpu/drm/drm_atomic_helper.c
index 40c2bd3e62e8..5a908c186037 100644
--- a/drivers/gpu/drm/drm_atomic_helper.c
+++ b/drivers/gpu/drm/drm_atomic_helper.c
@@ -38,6 +38,7 @@
  #include 
  #include 
  #include 
+#include 
  #include 
  #include 
  #include 
@@ -3086,6 +3087,7 @@ int drm_atomic_helper_swap_state(struct drm_atomic_state *state,
}
}
  
+	drm_panic_lock(state->dev);

for_each_oldnew_plane_in_state(state, plane, 

[RFC] drm/panic: Add drm panic locking

2024-03-01 Thread Daniel Vetter
Rough sketch for the locking of drm panic printing code. The upshot of
this approach is that we can pretty much entirely rely on the atomic
commit flow, with the pair of raw_spin_lock/unlock providing any
barriers we need, without having to create really big critical
sections in code.

This also avoids the need for drivers to explicitly update the
panic handler state, which they might forget to do, or not do
consistently, and then we blow up at the worst possible times.

It is somewhat racy against a concurrent atomic update, and we might
write into a buffer which the hardware will never display. But there's
fundamentally no way to avoid that - if we do the panic state update
explicitly after writing to the hardware, we might instead write to an
old buffer that the user will barely ever see.

Note that an rcu-protected dereference of plane->state would give us
the same guarantees, but it has the downside that we then need to
protect the plane state freeing functions with call_rcu too. Which
would very widely impact a lot of code and therefore doesn't seem
worth the complexity compared to a raw spinlock with very tiny
critical sections. Plus rcu cannot be used to protect access to
peek/poke registers anyway, so we'd still need it for those cases.

Peek/poke registers for vram access (or a gart pte reserved just for
panic code) are also the reason I've gone with a per-device and not
per-plane spinlock, since usually these things are global for the
entire display. Going with per-plane locks would mean drivers for such
hardware would need additional locks, which we don't want, since it
deviates from the per-console takeover locks design.
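
As a rough illustration (sketch only - the device structure, register
names and mmio layout below are invented for the example), a driver
with such a vram window could reuse the same per-device lock:

/* Hypothetical driver code: the peek/poke window registers are shared
 * with the panic handler, so take the per-device panic lock around them. */
static u32 foo_vram_peek(struct foo_device *fdev, u64 offset)
{
	u32 val;

	drm_panic_lock(&fdev->drm);
	writel(lower_32_bits(offset), fdev->mmio + FOO_VRAM_WINDOW);
	val = readl(fdev->mmio + FOO_VRAM_DATA);
	drm_panic_unlock(&fdev->drm);

	return val;
}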

Longer term it might be useful if the panic notifiers grow a bit more
structure than just the absolute bare
EXPORT_SYMBOL(panic_notifier_list) - somewhat aside, why is that not
EXPORT_SYMBOL_GPL ... If panic notifiers were more like console
drivers with proper register/unregister interfaces, we could perhaps
reuse the very fancy console lock with all its check and takeover
semantics that John Ogness is developing to fix the console_lock mess.
But for the initial cut of drm panic printing support I don't think
we need that, because the critical sections are extremely small and
only happen once per display refresh. So generally just 60 tiny locked
sections per second, which is nothing compared to a serial console
running at 115 kbaud doing really slow mmio writes for each byte. So for
now the raw spin trylock in the drm panic notifier callback should be
good enough.
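
For reference, the notifier side then looks roughly like this (sketch
only - drm_panic.h isn't quoted in full here, so the trylock helper
name is an assumption):

/* Sketch of the panic notifier callback: if the atomic commit path holds
 * the lock we simply skip this device rather than deadlock. */
static int drm_panic_notify(struct notifier_block *nb,
			    unsigned long action, void *data)
{
	struct drm_device *dev = ...; /* from the registered notifier */

	if (!drm_panic_trylock(dev))
		return NOTIFY_DONE;

	/* safe to look at plane->state and draw into the framebuffer here */

	drm_panic_unlock(dev);
	return NOTIFY_DONE;
}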

Another benefit of making panic notifiers more like full-blown
consoles (that are only used in panics) would be that we get the
two-stage design, where first all the safe outputs are used, and then
the dangerous takeover tricks are deployed (where for display drivers
we might also try to intercept any in-flight display buffer flips,
which, if we race and misprogram fifos and watermarks, can hang the
memory controller on some hw).

For context the actual implementation on the drm side is by Jocelyn
and this patch is meant to be combined with the overall approach in
v7 (v8 is a bit less flexible, which I think is the wrong direction):

https://lore.kernel.org/dri-devel/20240104160301.185915-1-jfale...@redhat.com/

Note that the locking is very much not correct there, hence this
separate rfc.

v2:
- fix authorship, this was all my typing
- some typo oopsies
- link to the drm panic work by Jocelyn for context

Signed-off-by: Daniel Vetter 
Cc: Jocelyn Falempe 
Cc: Andrew Morton 
Cc: "Peter Zijlstra (Intel)" 
Cc: Lukas Wunner 
Cc: Petr Mladek 
Cc: Steven Rostedt 
Cc: John Ogness 
Cc: Sergey Senozhatsky 
Cc: Maarten Lankhorst 
Cc: Maxime Ripard 
Cc: Thomas Zimmermann 
Cc: David Airlie 
Cc: Daniel Vetter 
---
 drivers/gpu/drm/drm_atomic_helper.c |  3 +
 include/drm/drm_mode_config.h   | 10 +++
 include/drm/drm_panic.h | 99 +
 3 files changed, 112 insertions(+)
 create mode 100644 include/drm/drm_panic.h

diff --git a/drivers/gpu/drm/drm_atomic_helper.c b/drivers/gpu/drm/drm_atomic_helper.c
index 40c2bd3e62e8..5a908c186037 100644
--- a/drivers/gpu/drm/drm_atomic_helper.c
+++ b/drivers/gpu/drm/drm_atomic_helper.c
@@ -38,6 +38,7 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 #include 
 #include 
@@ -3086,6 +3087,7 @@ int drm_atomic_helper_swap_state(struct drm_atomic_state *state,
}
}
 
+   drm_panic_lock(state->dev);
for_each_oldnew_plane_in_state(state, plane, old_plane_state, new_plane_state, i) {
WARN_ON(plane->state != old_plane_state);
 
@@ -3095,6 +3097,7 @@ int drm_atomic_helper_swap_state(struct drm_atomic_state *state,
state->planes[i].state = old_plane_state;
plane->state = new_plane_state;
}
+   drm_panic_unlock(state->dev);
 
for_each_oldnew_private_obj_in_state(state, obj, old_obj_state, new_obj_state, i) {
WARN_ON(obj->state != 

[RFC] drm/panic: Add drm panic locking

2024-03-01 Thread Daniel Vetter
From: Jocelyn Falempe 

Rough sketch for the locking of drm panic printing code. The upshot of
this approach is that we can pretty much entirely rely on the atomic
commit flow, with the pair of raw_spin_lock/unlock providing any
barriers we need, without having to create really big critical
sections in code.

This also avoids the need for drivers to explicitly update the
panic handler state, which they might forget to do, or not do
consistently, and then we blow up at the worst possible times.

It is somewhat racy against a concurrent atomic update, and we might
write into a buffer which the hardware will never display. But there's
fundamentally no way to avoid that - if we do the panic state update
explicitly after writing to the hardware, we might instead write to an
old buffer that the user will barely ever see.

Note that an rcu-protected dereference of plane->state would give us
the same guarantees, but it has the downside that we then need to
protect the plane state freeing functions with call_rcu too. Which
would very widely impact a lot of code and therefore doesn't seem
worth it compared to a raw spinlock with very tiny critical
sections. Plus rcu cannot be used to protect access to peek/poke registers
anyway, so we'd still need it for those cases.

Peek/poke registers for vram access (or a gart pte reserved just for
panic code) are also the reason I've gone with a per-device and not
per-plane spinlock, since usually these things are global for the
entire display. Going with per-plane locks would mean drivers for such
hardware would need additional locks, which we don't want, since it
deviates from the per-console takeover locks design.

Longer term it might be useful if the panic notifiers grow a bit more
structure than just the absolute bare
EXPORT_SYMBOL(panic_notifier_list) - somewhat aside, why is that not
EXPORT_SYMBOL_GPL ... If panic notifiers were more like console
drivers with proper register/unregister interfaces, we could perhaps
reuse the very fancy console lock with all its check and takeover
semantics that John Ogness is developing to fix the console_lock mess.
But for the initial cut of drm panic printing support I don't think
we need that, because the critical sections are extremely small and
only happen once per display refresh. So generally just 60 tiny locked
sections per second, which is nothing compared to a serial console
running at 115 kbaud doing really slow mmio writes for each byte. So for
now the raw spin trylock in the drm panic notifier callback should be
good enough.

Another benefit of making panic notifiers more like full-blown
consoles (that are only used in panics) would be that we get the
two-stage design, where first all the safe outputs are used, and then
the dangerous takeover tricks are deployed (where for display drivers
we might also try to intercept any in-flight display buffer flips,
which, if we race and misprogram fifos and watermarks, can hang the
memory controller on some hw).

Signed-off-by: Daniel Vetter 
Cc: Jocelyn Falempe 
Cc: Andrew Morton 
Cc: "Peter Zijlstra (Intel)" 
Cc: Lukas Wunner 
Cc: Petr Mladek 
Cc: Steven Rostedt 
Cc: John Ogness 
Cc: Sergey Senozhatsky 
Cc: Maarten Lankhorst 
Cc: Maxime Ripard 
Cc: Thomas Zimmermann 
Cc: David Airlie 
Cc: Daniel Vetter 
---
 drivers/gpu/drm/drm_atomic_helper.c |  3 +
 include/drm/drm_mode_config.h   | 10 +++
 include/drm/drm_panic.h | 99 +
 3 files changed, 112 insertions(+)
 create mode 100644 include/drm/drm_panic.h

diff --git a/drivers/gpu/drm/drm_atomic_helper.c b/drivers/gpu/drm/drm_atomic_helper.c
index 40c2bd3e62e8..5a908c186037 100644
--- a/drivers/gpu/drm/drm_atomic_helper.c
+++ b/drivers/gpu/drm/drm_atomic_helper.c
@@ -38,6 +38,7 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 #include 
 #include 
@@ -3086,6 +3087,7 @@ int drm_atomic_helper_swap_state(struct drm_atomic_state *state,
}
}
 
+   drm_panic_lock(state->dev);
for_each_oldnew_plane_in_state(state, plane, old_plane_state, new_plane_state, i) {
WARN_ON(plane->state != old_plane_state);
 
@@ -3095,6 +3097,7 @@ int drm_atomic_helper_swap_state(struct drm_atomic_state *state,
state->planes[i].state = old_plane_state;
plane->state = new_plane_state;
}
+   drm_panic_unlock(state->dev);
 
for_each_oldnew_private_obj_in_state(state, obj, old_obj_state, new_obj_state, i) {
WARN_ON(obj->state != old_obj_state);
diff --git a/include/drm/drm_mode_config.h b/include/drm/drm_mode_config.h
index 973119a9176b..92a390379e85 100644
--- a/include/drm/drm_mode_config.h
+++ b/include/drm/drm_mode_config.h
@@ -505,6 +505,16 @@ struct drm_mode_config {
 */
struct list_head plane_list;
 
+   /**
+* @panic_lock:
+*
+* Raw spinlock used to protect critical sections of code that access
+* the display hardware or modeset