On Thu 30-03-17 13:54:53, Michal Hocko wrote:
[...]
> +static struct zone * __meminit move_pfn_range(int online_type, int nid,
> + unsigned long start_pfn, unsigned long nr_pages)
> +{
> + struct pglist_data *pgdat = NODE_DATA(nid);
> + struct zone *zone =
On Thu 30-03-17 13:54:53, Michal Hocko wrote:
[...]
> -static int __meminit __add_section(int nid, struct zone *zone,
> - unsigned long phys_start_pfn)
> +static int __meminit __add_section(int nid, unsigned long phys_start_pfn)
> {
> int ret;
> + int
On Tue 04-04-17 14:21:19, Tobias Regnery wrote:
[...]
> Hi Michal,
Hi
> building an x86 allmodconfig with next-20170404 results in the following
> section mismatch warnings probably caused by this patch:
>
> WARNING: mm/built-in.o(.text+0x5a1c2): Section mismatch in reference from the
>
On 30.03.17, Michal Hocko wrote:
> From: Michal Hocko
>
> The current memory hotplug implementation relies on having all the
> struct pages associated with a zone during the physical hotplug phase
> (arch_add_memory->__add_pages->__add_section->__add_zone). In the vast
> majority of cases this
On Fri 31-03-17 14:18:08, Hillf Danton wrote:
>
> On March 30, 2017 7:55 PM Michal Hocko wrote:
> >
> > +static void __meminit resize_zone_range(struct zone *zone, unsigned long start_pfn,
> > + unsigned long nr_pages)
> > +{
> > + unsigned long old_end_pfn = zone_end_pfn(zone);
On March 30, 2017 7:55 PM Michal Hocko wrote:
>
> +static void __meminit resize_zone_range(struct zone *zone, unsigned long start_pfn,
> + unsigned long nr_pages)
> +{
> + unsigned long old_end_pfn = zone_end_pfn(zone);
> +
> + if (start_pfn < zone->zone_start_pfn)
> +
From: Michal Hocko
The current memory hotplug implementation relies on having all the
struct pages associated with a zone during the physical hotplug phase
(arch_add_memory->__add_pages->__add_section->__add_zone). In the vast
majority of cases this means that they are added to ZONE_NORMAL. This