Avoid multiplication (imul) operations when accessing:
 zone->free_area[order].nr_free

This was really tricky to find. I was puzzled why perf reported that
rmqueue_bulk was spending 44% of its time on an imul operation:

       │     del_page_from_free_list():
 44,54 │ e2:   imul   $0x58,%rax,%rax

The compiler generated this operation because struct free_area has a
size of 88 bytes (0x58 hex). Since 88 is not a power of two, the
compiler cannot use a shift operation and instead chooses the more
expensive imul to compute the offset into the free_area[] array.

This patch aligns struct free_area to a cache line, which causes the
compiler to avoid the imul operation. The imul operation is, admittedly,
very fast on modern Intel CPUs. To further help the fast path that
decrements 'nr_free', move the 'nr_free' member to be the first
element, which saves one 'add' operation.
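
To see how the size and the nr_free offset change, here is a small
stand-alone userspace sketch of both layouts. The MIGRATE_TYPES value
(5) and the 64-byte cache line are assumptions that match the 88-byte
size seen above; the real values depend on the kernel config:

  /* Stand-alone sketch of the layout change (not kernel code). */
  #include <stdio.h>
  #include <stddef.h>

  struct list_head { struct list_head *next, *prev; };
  #define MIGRATE_TYPES 5                 /* assumption, matches 88 bytes */

  struct free_area_old {                  /* layout before this patch */
          struct list_head free_list[MIGRATE_TYPES];
          unsigned long    nr_free;
  };

  struct free_area_new {                  /* layout after this patch */
          unsigned long    nr_free;
          unsigned long    __pad_to_align_free_list;
          struct list_head free_list[MIGRATE_TYPES];
  } __attribute__((aligned(64)));         /* ____cacheline_aligned_in_smp */

  int main(void)
  {
          printf("old: sizeof=%zu nr_free at %zu\n",
                 sizeof(struct free_area_old),
                 offsetof(struct free_area_old, nr_free));
          printf("new: sizeof=%zu nr_free at %zu\n",
                 sizeof(struct free_area_new),
                 offsetof(struct free_area_new, nr_free));
          return 0;
  }

With these assumptions it prints sizeof 88 / nr_free at offset 80 for
the old layout, and sizeof 128 / nr_free at offset 0 for the new one:
128 is a power of two, so the array index becomes a shift, and nr_free
at offset 0 needs no extra 'add'.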

Looking up the instruction latencies, this exchanges a 3-cycle imul for
a 1-cycle shl, saving 2 cycles. It does trade some space for this.
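
For reference, the 'before' line below is from the perf annotation
above; the 'after' line is the kind of instruction one would expect
once the struct is a power-of-two 128 bytes (an expectation, not
measured output):

  before:  imul   $0x58,%rax,%rax    # order * 88
  after:   shl    $0x7,%rax          # order * 128, i.e. order << 7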

Used: gcc (GCC) 9.3.1 20200408 (Red Hat 9.3.1-2)

Signed-off-by: Jesper Dangaard Brouer <bro...@redhat.com>
---
 include/linux/mmzone.h |    6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index b593316bff3d..4d83201717e1 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -93,10 +93,12 @@ extern int page_group_by_mobility_disabled;
 #define get_pageblock_migratetype(page)                                        \
        get_pfnblock_flags_mask(page, page_to_pfn(page), MIGRATETYPE_MASK)
 
+/* Aligned struct to make zone->free_area[order] access faster */
 struct free_area {
-       struct list_head        free_list[MIGRATE_TYPES];
        unsigned long           nr_free;
-};
+       unsigned long           __pad_to_align_free_list;
+       struct list_head        free_list[MIGRATE_TYPES];
+}  ____cacheline_aligned_in_smp;
 
 static inline struct page *get_page_from_free_area(struct free_area *area,
                                            int migratetype)

