A recent innocuous internal optimization change in LLVM [1] causes the
same issue that necessitated commit 660942f2441d ("drm: omapdrm: reduce
clang stack usage") to occur in dispc_runtime_suspend() from inlining
dispc_save_context():
  drivers/gpu/drm/omapdrm/dss/dispc.c:4720:27: error: stack frame size (2272) exceeds limit (2048) in 'dispc_runtime_suspend' [-Werror,-Wframe-larger-than]
   4720 | static __maybe_unused int dispc_runtime_suspend(struct device *dev)
        |                           ^
There is an unfortunate interaction between the inner loops of
dispc_save_context() getting unrolled and the calculation of the index
into the ctx array being spilled to the stack when sanitizers are
enabled [2].
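
As an illustration, a minimal sketch of the pattern at play (the SR()
macro mirrors the RR() definition visible in the diff below; the loop
bound and register name are simplified stand-ins, not the exact
dispc.c code):

  /* Illustrative only: save one register value into the context array. */
  #define SR(dispc, reg) \
	(dispc)->ctx[DISPC_##reg / sizeof(u32)] = dispc_read_reg(dispc, DISPC_##reg)

  static void dispc_save_context(struct dispc_device *dispc)
  {
	int i;

	/*
	 * With sanitizers enabled, LLVM fully unrolls loops of this shape
	 * and spills each unrolled iteration's ctx[] index to its own
	 * stack slot, inflating the frame of whichever function the loop
	 * body is ultimately inlined into.
	 */
	for (i = 0; i < dispc_get_num_ovls(dispc); i++)
		SR(dispc, OVL_BA0(i));
  }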
While this should obviously be addressed on the LLVM side, such a fix
may not be easy to craft, and it is simple enough to work around the
issue in the same manner as before by marking dispc_save_context() with
noinline_for_stack, which makes it use the same amount of stack as
dispc_restore_context() does after the same change.
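
For reference, noinline_for_stack is, as of current kernels, just a
self-documenting alias for noinline (defined in
include/linux/compiler_types.h), so the callee keeps its own stack
frame rather than having it folded into dispc_runtime_suspend()'s:

  /* A name that documents *why* inlining is suppressed; no other semantics. */
  #define noinline_for_stack noinline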
Cc: [email protected]
Link: https://github.com/llvm/llvm-project/commit/055bfc027141bbfafd51fb43f5ab81ba3b480649 [1]
Link: https://llvm.org/pr143908 [2]
Signed-off-by: Nathan Chancellor <[email protected]>
---
drivers/gpu/drm/omapdrm/dss/dispc.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/omapdrm/dss/dispc.c b/drivers/gpu/drm/omapdrm/dss/dispc.c
index cf055815077cffad554a4ae58cfd7b81edcbb0d4..d079f557c8f24d1afd0bc182edd13165cb9c356c 100644
--- a/drivers/gpu/drm/omapdrm/dss/dispc.c
+++ b/drivers/gpu/drm/omapdrm/dss/dispc.c
@@ -417,7 +417,7 @@ static bool dispc_has_feature(struct dispc_device *dispc,
 #define RR(dispc, reg) \
 	dispc_write_reg(dispc, DISPC_##reg, dispc->ctx[DISPC_##reg / sizeof(u32)])
 
-static void dispc_save_context(struct dispc_device *dispc)
+static noinline_for_stack void dispc_save_context(struct dispc_device *dispc)
 {
 	int i, j;
 
---
base-commit: 76eeb9b8de9880ca38696b2fb56ac45ac0a25c6c
change-id: 20250911-omapdrm-reduce-clang-stack-usage-pt-2-9a9ae9263b91
Best regards,
--
Nathan Chancellor <[email protected]>