On 2017/2/15 18:47, Vlastimil Babka wrote:
> On 02/14/2017 11:07 AM, Xishi Qiu wrote:
>> On 2017/2/11 1:23, Vlastimil Babka wrote:
>>
>>> When stealing pages from pageblock of a different migratetype, we count how
>>> many free pages were stolen, and change the pageblock's migratetype if more
>>> than half of the pageblock was free. This might be too conservative, as there
>>> might be other pages that are not free, but were allocated with the same
>>> migratetype as our allocation requested.
>>>
>>> While we cannot determine the migratetype of allocated pages precisely (at
>>> least without the page_owner functionality enabled), we can count pages that
>>> compaction would try to isolate for migration - those are either on LRU or
>>> __PageMovable(). The rest can be assumed to be MIGRATE_RECLAIMABLE or
>>> MIGRATE_UNMOVABLE, which we cannot easily distinguish. This counting can be
>>> done as part of free page stealing with little additional overhead.
>>>
>>> The page stealing code is changed so that it considers free pages plus pages
>>> of the "good" migratetype for the decision whether to change pageblock's
>>> migratetype.
>>>
>>> The result should be more accurate migratetype of pageblocks wrt the actual
>>> pages in the pageblocks, when stealing from semi-occupied pageblocks. This
>>> should help the efficiency of page grouping by mobility.
>>>
>>> Signed-off-by: Vlastimil Babka <vba...@suse.cz>
>>
>> Hi Vlastimil,
>>
>> How about these two changes?
>>
>> 1. If we steal some free pages, we will add these page at the head of
>> start_migratetype list, it will cause more fixed, because these pages will
>> be allocated more easily.
>
> What do you mean by "more fixed" here?
>
>> So how about use list_move_tail instead of list_move?
>
> Hmm, not sure if it can make any difference. We steal because the lists
> are currently empty (at least for the order we want), so it shouldn't
> matter if we add to head or tail.
>

Hi Vlastimil,
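To make sure we are talking about the same change for point 1: what I have in
mind is only switching the list primitive used in the move_freepages() loop,
roughly like the sketch below. This is simplified from my reading of the
current code, not a tested patch; the pfn/node sanity checks in the loop are
omitted.

static int move_freepages(struct zone *zone,
                          struct page *start_page, struct page *end_page,
                          int migratetype)
{
        struct page *page;
        unsigned int order;
        int pages_moved = 0;

        for (page = start_page; page <= end_page;) {
                if (!PageBuddy(page)) {
                        page++;
                        continue;
                }

                order = page_order(page);
                /*
                 * Put the stolen free pages at the tail of the target free
                 * list, so pages that already belonged to start_migratetype
                 * keep being allocated first.
                 */
                list_move_tail(&page->lru,
                               &zone->free_area[order].free_list[migratetype]);
                page += 1 << order;
                pages_moved += 1 << order;
        }

        return pages_moved;
}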
Please see the following case, I am not sure if it is right.

MIGRATE_MOVABLE
order:    0 1 2 3 4 5 6 7 8 9 10
free num: 1 1 1 1 1 1 1 1 1 1 0   // one page (e.g. page A) was allocated before

MIGRATE_UNMOVABLE
order:    0 1 2 3 4 5 6 7 8 9 10
free num: x x x x 0 0 0 0 0 0 0   // we want order=4, so steal from MIGRATE_MOVABLE

We allocate an order=4 page as MIGRATE_UNMOVABLE; it falls back to stealing
pages from MIGRATE_MOVABLE, and we move the free pages from the
MIGRATE_MOVABLE lists to the MIGRATE_UNMOVABLE lists.

The MIGRATE_UNMOVABLE lists for order 4-9 are empty, so adding to the head or
the tail makes no difference there. But the lists for order 0-3 are not empty,
so if we add to the head, we will later allocate the pages stolen from
MIGRATE_MOVABLE first. Then we have less chance to merge a large block
(order=10) again when the one allocated page (page A) is freed.

Also, expand() splits the order=9 block taken from MIGRATE_MOVABLE to satisfy
the order=4 allocation, so if we add to the head, we will later allocate the
pages split from that order=9 block first. Then we have less chance to merge a
large block (order=9) again when the order=4 page is freed.

>> __rmqueue_fallback
>>   steal_suitable_fallback
>>     move_freepages_block
>>       move_freepages
>>         list_move
>>
>> 2. When doing expand() - list_add(), usually the list is empty, but in the
>> following case, the list is not empty, because we did move_freepages_block()
>> before.
>>
>> __rmqueue_fallback
>>   steal_suitable_fallback
>>     move_freepages_block  // move to the list of start_migratetype
>>   expand                  // split the largest order
>>     list_add              // add to the list of start_migratetype
>>
>> So how about use list_add_tail instead of list_add? Then we can merge the
>> large block again as soon as the page freed.
>
> Same here. The lists are not empty, but contain probably just the pages
> from our stolen pageblock. It shouldn't matter how we order them within
> the same block.
>
> So maybe it could make some difference for higher-order allocations, but
> it's unclear to me. Making e.g. expand() more complex with a flag to
> tell it the head vs tail add could mean extra overhead in allocator fast
> path that would offset any gains.
>
>> Thanks,
>> Xishi Qiu
>>
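For point 2, the expand() change I was thinking of would be something like the
sketch below, so that the remainders of a just-stolen high-order block go to
the tail of the start_migratetype lists. It is only illustrative, based on my
reading of the current expand(); the guard-page handling is omitted and the
extra "tail" parameter is made up.

static inline void expand(struct zone *zone, struct page *page,
                          int low, int high, struct free_area *area,
                          int migratetype, bool tail)
{
        unsigned long size = 1 << high;

        while (high > low) {
                area--;
                high--;
                size >>= 1;

                /*
                 * Each remainder of the split goes back to a free list.
                 * When the block was just stolen from another migratetype,
                 * add it to the tail so that pages which already belonged
                 * to start_migratetype are handed out first.
                 */
                if (tail)
                        list_add_tail(&page[size].lru,
                                      &area->free_list[migratetype]);
                else
                        list_add(&page[size].lru,
                                 &area->free_list[migratetype]);
                area->nr_free++;
                set_page_order(&page[size], high);
        }
}

I understand your concern that even this extra branch is in the allocator fast
path, so it may not be worth it without numbers showing a benefit for
higher-order allocations.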