From: Andi Kleen <a...@linux.intel.com>

This is a very complex function that is called from multiple places.
It is unlikely that inlining it (or not) makes any difference
to its run time.

This saves around 13k of text in my kernel:

   text    data      bss      dec     hex filename
9083992 5367600 11116544 25568136 1862388 vmlinux-before-load-avg
9070166 5367600 11116544 25554310 185ed86 vmlinux-load-avg
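
For illustration only (a hypothetical userspace sketch, not the kernel
function; maybe_inline and big_helper are invented for the example):
a forced-inline function body is duplicated into every call site,
while a plain static function can be emitted once and shared. Whether
the compiler actually keeps the plain version out of line is up to its
heuristics:

/*
 * Hypothetical userspace sketch (not the kernel code): compare the
 * text size with and without forced inlining of a large static
 * helper that has several call sites.
 *
 *   gcc -O2 -c sketch.c && size sketch.o
 *   gcc -O2 -DFORCE_INLINE -c sketch.c && size sketch.o
 */
#include <stdio.h>

#ifdef FORCE_INLINE
/* essentially the kernel's definition of __always_inline */
#define maybe_inline inline __attribute__((__always_inline__))
#else
#define maybe_inline
#endif

/* stand-in for a large, complex function */
static maybe_inline int big_helper(int x)
{
        int i, acc = x;

        for (i = 0; i < 1000; i++)
                acc = acc * 31 + i;
        return acc;
}

/*
 * Several call sites: with forced inlining, each one gets its own
 * copy of big_helper()'s body in .text.
 */
int caller_a(int x) { return big_helper(x + 1); }
int caller_b(int x) { return big_helper(x + 2); }
int caller_c(int x) { return big_helper(x + 3); }

int main(void)
{
        printf("%d\n", caller_a(0) + caller_b(0) + caller_c(0));
        return 0;
}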

Cc: pet...@infradead.org
Signed-off-by: Andi Kleen <a...@linux.intel.com>
---
 kernel/sched/fair.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index dea138964b91..78ace89cd481 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -2848,7 +2848,7 @@ static u32 __compute_runnable_contrib(u64 n)
  *   load_avg = u_0` + y*(u_0 + u_1*y + u_2*y^2 + ... )
  *            = u_0 + u_1*y + u_2*y^2 + ... [re-labeling u_i --> u_{i+1}]
  */
-static __always_inline int
+static int
 __update_load_avg(u64 now, int cpu, struct sched_avg *sa,
                  unsigned long weight, int running, struct cfs_rq *cfs_rq)
 {
-- 
2.9.3
