Wei,

On Thu, 28 Mar 2019, Wei Yang wrote:

please trim your replies. It's annoying to have to search for the relevant
content in the middle of a large, useless quote.

> On Sun, Mar 24, 2019 at 03:29:04PM +0100, Thomas Gleixner wrote:
> >Wei,
> >-static int __meminit split_mem_range(struct map_range *mr, int nr_range,
> >-                                 unsigned long start,
> >-                                 unsigned long end)
> >-{
> >-    unsigned long start_pfn, end_pfn, limit_pfn;
> >-    unsigned long pfn;
> >-    int i;
> >+    if (!IS_ALIGNED(mr->end, mi->size)) {
> >+            /* Try to fit as much as possible */
> >+            len = round_down(mr->end - mr->start, mi->size);
> >+            if (!len)
> >+                    return false;
> >+            mr->end = mr->start + len;
> >+    }
> > 
> >-    limit_pfn = PFN_DOWN(end);
> >+    /* Store the effective page size mask */
> >+    mr->page_size_mask = mi->mask;
> 
> I don't get the point here. Why store the effective page size mask just for
> the "middle" range?
> 
> The original behavior sets the "head" and "tail" ranges with a lower-level
> page size mask.

What has this to do with the middle range? Nothing. This is the path where
the current map level (1g, 2m, 4k) is applied from mr->start to
mr->end. That's the effective mapping of this map_range entry. 
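
For illustration only, a minimal userspace sketch of the logic in the quoted
hunk: trim mr->end so the whole entry can be mapped with the current level's
page size, then record the effective page size mask. The struct layouts and
the names fit_range/map_info are stand-ins for this sketch, not the actual
patch.

	/*
	 * Userspace sketch of the map_range trimming discussed above.
	 * Names and types are illustrative, not the kernel code.
	 */
	#include <stdbool.h>
	#include <stdint.h>
	#include <stdio.h>

	#define IS_ALIGNED(x, a)	(((x) & ((a) - 1)) == 0)
	#define round_down(x, a)	((x) & ~((a) - 1))

	struct map_range {
		uint64_t start;
		uint64_t end;
		unsigned int page_size_mask;
	};

	struct map_info {
		uint64_t size;		/* mapping unit: 4K, 2M or 1G */
		unsigned int mask;	/* page size mask for this level */
	};

	/* Trim mr->end so the range can be mapped with mi->size pages. */
	static bool fit_range(struct map_range *mr, const struct map_info *mi)
	{
		uint64_t len;

		if (!IS_ALIGNED(mr->end, mi->size)) {
			/* Try to fit as much as possible */
			len = round_down(mr->end - mr->start, mi->size);
			if (!len)
				return false;
			mr->end = mr->start + len;
		}

		/* Store the effective page size mask */
		mr->page_size_mask = mi->mask;
		return true;
	}

	int main(void)
	{
		struct map_range mr = { .start = 0x200000, .end = 0x7ff000 };
		struct map_info mi_2m = { .size = 0x200000, .mask = 1 << 1 };

		if (fit_range(&mr, &mi_2m))
			printf("mapped %#llx-%#llx with mask %#x\n",
			       (unsigned long long)mr.start,
			       (unsigned long long)mr.end, mr.page_size_mask);
		return 0;
	}

With the example values above, the unaligned tail is dropped and the entry is
mapped as 0x200000-0x600000 with the 2M page size mask; the remainder would be
handled by a lower-level map_range entry.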

Thanks,

        tglx