From: Philippe Gerum <r...@xenomai.org>

Since kernel v5.8, __vmalloc() no longer takes protection bits, as
PAGE_KERNEL is now hardwired in. Therefore we cannot disable the cache
for the UMM segment via the allocation call directly anymore.

That said, we no longer support any CPU architecture exhibiting cache
aliasing braindamage either (that was armv4/v5), so let's convert to
the new __vmalloc() call signature without bothering about cache
settings.
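
For illustration only (not part of this patch), a caller setting up the
UMM segment could then allocate through the wrapper regardless of the
kernel version. The function name, include paths and the __GFP_ZERO
flag below are assumptions for the sketch, not code from the Xenomai
tree:

	#include <linux/vmalloc.h>
	#include <asm/xenomai/wrappers.h>	/* assumed include path */

	/*
	 * Allocate a zeroed, vmalloc'ed UMM segment; builds against both
	 * pre-5.8 and 5.8+ kernels thanks to the wrapper added below.
	 */
	static void *alloc_umm_segment(size_t umm_size)
	{
		return vmalloc_kernel(umm_size, __GFP_ZERO);
	}

Freeing remains a plain vfree() call in both cases, since only the
allocation prototype changed in v5.8.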

Signed-off-by: Philippe Gerum <r...@xenomai.org>
---
 kernel/cobalt/include/asm-generic/xenomai/wrappers.h | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/kernel/cobalt/include/asm-generic/xenomai/wrappers.h b/kernel/cobalt/include/asm-generic/xenomai/wrappers.h
index 930e6364e5..652a04759f 100644
--- a/kernel/cobalt/include/asm-generic/xenomai/wrappers.h
+++ b/kernel/cobalt/include/asm-generic/xenomai/wrappers.h
@@ -210,4 +210,10 @@ devm_hwmon_device_register_with_groups(struct device *dev, const char *name,
 #define old_timeval32     compat_timeval
 #endif
 
+#if LINUX_VERSION_CODE < KERNEL_VERSION(5,8,0)
+#define vmalloc_kernel(__size, __flags)        __vmalloc(__size, GFP_KERNEL|__flags, PAGE_KERNEL)
+#else
+#define vmalloc_kernel(__size, __flags)        __vmalloc(__size, GFP_KERNEL|__flags)
+#endif
+
 #endif /* _COBALT_ASM_GENERIC_WRAPPERS_H */
-- 
2.26.2

