Initial bits to prevent priority changes of cyclic scheduler tasks by only allowing them to be SCHED_FIFO. Fairly hacky at this time and will need revisiting because of the security concerns.
Affects task death handling since it uses an additional scheduler class
hook for clean up at death. Must be SCHED_FIFO.

Signed-off-by: Bill Huey (hui) <bill.h...@gmail.com>
---
 kernel/sched/core.c | 13 +++++++++++++
 1 file changed, 13 insertions(+)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 44db0ff..cf6cf57 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -87,6 +87,10 @@
 #include "../workqueue_internal.h"
 #include "../smpboot.h"
 
+#ifdef CONFIG_RTC_CYCLIC
+#include "cyclic.h"
+#endif
+
 #define CREATE_TRACE_POINTS
 #include <trace/events/sched.h>
 
@@ -2074,6 +2078,10 @@ static void __sched_fork(unsigned long clone_flags, struct task_struct *p)
 	memset(&p->se.statistics, 0, sizeof(p->se.statistics));
 #endif
 
+#ifdef CONFIG_RTC_CYCLIC
+	RB_CLEAR_NODE(&p->rt.rt_overrun.node);
+#endif
+
 	RB_CLEAR_NODE(&p->dl.rb_node);
 	init_dl_task_timer(&p->dl);
 	__dl_clear_params(p);
@@ -3881,6 +3889,11 @@ recheck:
 		if (dl_policy(policy))
 			return -EPERM;
 
+#ifdef CONFIG_RTC_CYCLIC
+		if (rt_overrun_policy(p, policy))
+			return -EPERM;
+#endif
+
 		/*
 		 * Treat SCHED_IDLE as nice 20. Only allow a switch to
 		 * SCHED_NORMAL if the RLIMIT_NICE would normally permit it.
-- 
2.5.0