On Wed, Oct 17, 2018 at 12:20:42PM +0100, Mel Gorman wrote:
> On Wed, Oct 17, 2018 at 02:33:28PM +0800, Aaron Lu wrote:
> > Profile on Intel Skylake server shows the most time consuming part
> > under zone->lock on allocation path is accessing those to-be-returned
> > page's "struct page" on the free_list inside zone->lock. One explanation
> > is, different CPUs are releasing pages to the head of free_list and
> > those page's 'struct page' may very well be cache cold for the allocating
> > CPU when it grabs these pages from free_list's head. The purpose here
> > is to avoid touching these pages one by one inside zone->lock.
> >
>
> I didn't read this one in depth because it's somewhat orthogonal to the
> lazy buddy merging which I think would benefit from being finalised and
> ensuring that there are no reductions in high-order allocation success
> rates. Pages being allocated on one CPU and freed on another is not that
> unusual -- ping-pong workloads or things like netperf used to exhibit
> this sort of pattern.
>
> However, this part stuck out:
>
> > +static inline void zone_wait_cluster_alloc(struct zone *zone)
> > +{
> > +	while (atomic_read(&zone->cluster.in_progress))
> > +		cpu_relax();
> > +}
> > +
>
> RT has had problems with cpu_relax in the past but more importantly, as
> this delays parallel compactions and allocations of contig ranges, we
> could be stuck here for very long periods of time with interrupts
> disabled.

The longest possible wait is one CPU accessing pcp->batch cold cache
lines. Reason: when zone_wait_cluster_alloc() is called, we already hold
zone->lock, so no more allocations are possible. Waiting for in_progress
to drop to zero means waiting for every CPU that incremented it to finish
processing its allocated pages. Each such CPU allocates at most
pcp->batch pages and, in the worst case, all of those page structures are
cache cold, so the longest wait is one CPU accessing pcp->batch cold
cache lines. I have no idea if this time is too long though.
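To make that bound concrete, the allocation side would presumably pair
the increment/decrement with the batched free_list unlink along these
lines. This is a minimal sketch of the idea only, not the actual patch:
apart from zone->cluster.in_progress and zone_wait_cluster_alloc() from
the quoted code, the helper names below are made up for illustration.

	/*
	 * Illustrative sketch, not the real patch: only
	 * zone->cluster.in_progress comes from the quoted code; the
	 * helpers below are assumed names.
	 */
	static void cluster_alloc_sketch(struct zone *zone,
					 struct list_head *list,
					 unsigned int batch) /* == pcp->batch */
	{
		struct page *page;

		spin_lock(&zone->lock);
		/* Announce the deferred struct page work before
		 * dropping the lock. */
		atomic_inc(&zone->cluster.in_progress);
		/* Unlink up to 'batch' pages without touching their
		 * struct pages (assumed helper). */
		unlink_pages_without_touching(zone, list, batch);
		spin_unlock(&zone->lock);

		/*
		 * The possibly cache cold struct pages are only written
		 * here, outside zone->lock.  This window is all that
		 * zone_wait_cluster_alloc() can spin on, and it covers
		 * at most 'batch' pages per CPU.
		 */
		list_for_each_entry(page, list, lru)
			prep_cold_struct_page(page);	/* assumed helper */

		atomic_dec(&zone->cluster.in_progress);
	}

Note the increment has to happen while zone->lock is still held: a
waiter only calls zone_wait_cluster_alloc() after taking zone->lock
itself, so it cannot miss a batch that is still in flight.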
> It gets even worse if it's from an interrupt context such as
> jumbo frame allocation or a high-order slab allocation that is atomic.

My understanding is atomic allocation won't trigger compaction, no?

> These potentially large periods of time with interrupts disabled is very
> hazardous.

I see and agree, thanks for pointing this out. Hopefully the worst case
time mentioned above won't be regarded as unbounded or too long.

> It may be necessary to consider instead minimising the number of struct
> page updates when merging to PCP and then either increasing the size of
> the PCP or allowing it to exceed pcp->high for short periods of time to
> batch the struct page updates.

I don't quite follow this part. It doesn't seem possible to exceed
pcp->high in the allocation path, or are you talking about the free path?

And thanks a lot for the review!