On 01/18/2017 08:12 AM, Hillf Danton wrote:
> On Wednesday, January 18, 2017 6:16 AM Vlastimil Babka wrote:
>> @@ -3802,13 +3811,8 @@ __alloc_pages_nodemask(gfp_t gfp_mask, unsigned int order,
>>  	 * Also recalculate the starting point for the zonelist iterator or
>>  	 * we could end up iterating over non-eligible zones endlessly.
>>  	 */
>
> Is the newly added comment still needed?
You're right that it's no longer true. I think we can even remove most of the zoneref trickery and non-NULL checks in the fastpath (as a cleanup patch on top), as the loop in get_page_from_freelist() should handle an empty zonelist iterator just fine. IIRC Mel even did this in the microopt series, but I pointed out that a NULL preferred_zoneref pointer would be dangerous in get_page_from_freelist(). We didn't realize that we were checking the wrong pointer (which patch 1/4 here fixes).
Vlastimil
>> -	if (unlikely(ac.nodemask != nodemask)) {
>> -no_zone:
>> +	if (unlikely(ac.nodemask != nodemask))
>>  		ac.nodemask = nodemask;
>> -		ac.preferred_zoneref = first_zones_zonelist(ac.zonelist,
>> -						ac.high_zoneidx, ac.nodemask);
>> -		/* If we have NULL preferred zone, slowpath wll handle that */
>> -	}
>>
>>  	page = __alloc_pages_slowpath(alloc_mask, order, &ac);
>> --
>> 2.11.0

