Commit-ID:  1a1d48a4a8fde49aedc045d894efe67173d59fe0
Gitweb:     http://git.kernel.org/tip/1a1d48a4a8fde49aedc045d894efe67173d59fe0
Author:     Denys Vlasenko <[email protected]>
AuthorDate: Tue, 4 Aug 2015 16:15:14 +0200
Committer:  Ingo Molnar <[email protected]>
CommitDate: Wed, 5 Aug 2015 09:38:08 +0200

linux/bitmap: Force inlining of bitmap weight functions

With this config:

  http://busybox.net/~vda/kernel_config_OPTIMIZE_INLINING_and_Os

gcc-4.7.2 generates many copies of these tiny functions:

        bitmap_weight (55 copies):
        55                      push   %rbp
        48 89 e5                mov    %rsp,%rbp
        e8 3f 3a 8b 00          callq  __bitmap_weight
        5d                      pop    %rbp
        c3                      retq

        hweight_long (23 copies):
        55                      push   %rbp
        e8 b5 65 8e 00          callq  __sw_hweight64
        48 89 e5                mov    %rsp,%rbp
        5d                      pop    %rbp
        c3                      retq

See https://gcc.gnu.org/bugzilla/show_bug.cgi?id=66122

This patch fixes this via s/inline/__always_inline/.

While at it, also replace two "__inline__" uses with the usual "inline"
(the rest of the source file uses the latter).

            text     data      bss       dec  filename
        86971357 17195880 36659200 140826437  vmlinux.before
        86971120 17195912 36659200 140826232  vmlinux

Signed-off-by: Denys Vlasenko <[email protected]>
Cc: Andrew Morton <[email protected]>
Cc: David Rientjes <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Thomas Graf <[email protected]>
Cc: [email protected]
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
---
 include/linux/bitmap.h | 2 +-
 include/linux/bitops.h | 6 +++---
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/include/linux/bitmap.h b/include/linux/bitmap.h
index ea17cca..9653fdb 100644
--- a/include/linux/bitmap.h
+++ b/include/linux/bitmap.h
@@ -295,7 +295,7 @@ static inline int bitmap_full(const unsigned long *src, unsigned int nbits)
        return find_first_zero_bit(src, nbits) == nbits;
 }
 
-static inline int bitmap_weight(const unsigned long *src, unsigned int nbits)
+static __always_inline int bitmap_weight(const unsigned long *src, unsigned int nbits)
 {
        if (small_const_nbits(nbits))
                return hweight_long(*src & BITMAP_LAST_WORD_MASK(nbits));
diff --git a/include/linux/bitops.h b/include/linux/bitops.h
index 297f5bd..e635533 100644
--- a/include/linux/bitops.h
+++ b/include/linux/bitops.h
@@ -57,7 +57,7 @@ extern unsigned long __sw_hweight64(__u64 w);
             (bit) < (size);                                    \
             (bit) = find_next_zero_bit((addr), (size), (bit) + 1))
 
-static __inline__ int get_bitmask_order(unsigned int count)
+static inline int get_bitmask_order(unsigned int count)
 {
        int order;
 
@@ -65,7 +65,7 @@ static __inline__ int get_bitmask_order(unsigned int count)
        return order;   /* We could be slightly more clever with -1 here... */
 }
 
-static __inline__ int get_count_order(unsigned int count)
+static inline int get_count_order(unsigned int count)
 {
        int order;
 
@@ -75,7 +75,7 @@ static __inline__ int get_count_order(unsigned int count)
        return order;
 }
 
-static inline unsigned long hweight_long(unsigned long w)
+static __always_inline unsigned long hweight_long(unsigned long w)
 {
        return sizeof(w) == 4 ? hweight32(w) : hweight64(w);
 }
--