Hi Vishal,
"Verma, Vishal L" <vishal.l.ve...@intel.com> writes:
> On Wed, 2023-06-21 at 11:36 +0530, Tarun Sahu wrote:
>> Hi Alison,
>>
>> Alison Schofield <alison.schofi...@intel.com> writes:
>>
>> > On Tue, Jun 20, 2023 at 07:33:32PM +0530, Tarun Sahu wrote:
>> > > memory_group_register_static takes maximum number of pages as the
>> > > argument while dev_dax_kmem_probe passes total_len (in bytes) as
>> > > the argument.
>> >
>> > This sounds like a fix. An explanation of the impact and a fixes tag
>> > may be needed. Also, wondering how you found it.
>> >
>> Yes, it is a fix; I found it during a dry code walk-through.
>> There is no real impact, as memory_group_register_static just sets the
>> max_pages limit, which is used in auto_movable_zone_for_pfn to
>> determine the zone.
>>
>> This might cause the following conditions to behave differently.
>>
>> This will always be true, so the jump to kernel_zone will happen:
>>
>> 	if (!auto_movable_can_online_movable(NUMA_NO_NODE, group, nr_pages))
>> 		goto kernel_zone;
>> ---
>> kernel_zone:
>> 	return default_kernel_zone_for_pfn(nid, pfn, nr_pages);
>> ---
>>
>> And here, in the code below, the range that zone_intersects compares
>> will be larger, as nr_pages will be higher (derived from the total_len
>> passed in dev_dax_kmem_probe):
>>
>> static struct zone *default_kernel_zone_for_pfn(int nid, unsigned long start_pfn,
>> 						unsigned long nr_pages)
>> {
>> 	struct pglist_data *pgdat = NODE_DATA(nid);
>> 	int zid;
>>
>> 	for (zid = 0; zid < ZONE_NORMAL; zid++) {
>> 		struct zone *zone = &pgdat->node_zones[zid];
>>
>> 		if (zone_intersects(zone, start_pfn, nr_pages))
>> 			return zone;
>> 	}
>>
>> 	return &pgdat->node_zones[ZONE_NORMAL];
>> }
>>
>> In most cases, ZONE_NORMAL will be returned. There are no crash/panic
>> issues involved here; only the decision on selecting the zone is
>> affected.
>>
>
> Hi Tarun,
>
> Good find!
> With a Fixes tag, and perhaps inclusion of a bit more of
> this detail described in the commit message, feel free to add:
Thanks for reviewing, sent the updated version.
> Reviewed-by: Vishal Verma <vishal.l.ve...@intel.com>