(I'm back now)
On Fri, Jun 26, 2020 at 02:47:06PM +0200, Michal Hocko wrote:
> On Mon 22-06-20 15:17:39, Daniel Jordan wrote:
> > Hello Michal,
> >
> > (I've been away and may be slow to respond for a little while)
> >
> > On Fri, Jun 19, 2020 at 02:07:04PM +0200, Michal Hocko wrote:
> > > I bel
On Mon 22-06-20 15:17:39, Daniel Jordan wrote:
> Hello Michal,
>
> (I've been away and may be slow to respond for a little while)
>
> On Fri, Jun 19, 2020 at 02:07:04PM +0200, Michal Hocko wrote:
> > On Tue 09-06-20 18:54:51, Daniel Jordan wrote:
> > [...]
> > > @@ -1390,6 +1391,15 @@ static unsigned long probe_memory_block_size(void)
Hello Michal,
(I've been away and may be slow to respond for a little while)
On Fri, Jun 19, 2020 at 02:07:04PM +0200, Michal Hocko wrote:
> On Tue 09-06-20 18:54:51, Daniel Jordan wrote:
> [...]
> > @@ -1390,6 +1391,15 @@ static unsigned long probe_memory_block_size(void)
> > goto done;
On Tue 09-06-20 18:54:51, Daniel Jordan wrote:
[...]
> @@ -1390,6 +1391,15 @@ static unsigned long probe_memory_block_size(void)
> goto done;
> }
>
> + /*
> + * Use max block size to minimize overhead on bare metal, where
> + * alignment for memory hotplug isn't a concern.
On Thu, Jun 11, 2020 at 10:05:38AM -0700, Dave Hansen wrote:
> One other nit for this. We *do* have actual hardware hotplug, and I'm
> pretty sure the alignment guarantees for hardware hotplug are pretty
> weak. For instance, the alignment guarantees for persistent memory are
> still only 64MB ev
On 6/11/20 9:59 AM, Daniel Jordan wrote:
> On Thu, Jun 11, 2020 at 07:16:02AM -0700, Dave Hansen wrote:
>> On 6/9/20 3:54 PM, Daniel Jordan wrote:
>>> + /*
>>> + * Use max block size to minimize overhead on bare metal, where
>>> + * alignment for memory hotplug isn't a concern.
>>> + */
On Thu, Jun 11, 2020 at 07:16:02AM -0700, Dave Hansen wrote:
> On 6/9/20 3:54 PM, Daniel Jordan wrote:
> > + /*
> > + * Use max block size to minimize overhead on bare metal, where
> > + * alignment for memory hotplug isn't a concern.
> > + */
> > + if (hypervisor_is_type(X86_HYPER_NATIVE)) {
On 6/9/20 3:54 PM, Daniel Jordan wrote:
> + /*
> + * Use max block size to minimize overhead on bare metal, where
> + * alignment for memory hotplug isn't a concern.
> + */
> + if (hypervisor_is_type(X86_HYPER_NATIVE)) {
> + bz = MAX_BLOCK_SIZE;
> + goto done;
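A minimal standalone sketch of the block-size selection being discussed, for anyone trying it outside the kernel: the 128M and 2G constants are assumed from x86-64's MIN_MEMORY_BLOCK_SIZE and MAX_BLOCK_SIZE, bare_metal() is a stand-in for hypervisor_is_type(X86_HYPER_NATIVE), and the fallback alignment loop reflects the existing probe_memory_block_size() behavior rather than this patch.

    #include <stdbool.h>
    #include <stdio.h>

    #define MIN_MEMORY_BLOCK_SIZE  (1UL << 27)  /* 128M, assumed x86-64 value */
    #define MAX_BLOCK_SIZE         (1UL << 31)  /* 2G, assumed x86-64 value */
    #define IS_ALIGNED(x, a)       (((x) & ((a) - 1)) == 0)

    /* Stand-in for hypervisor_is_type(X86_HYPER_NATIVE). */
    static bool bare_metal(void)
    {
            return true;
    }

    static unsigned long choose_block_size(unsigned long boot_mem_end)
    {
            unsigned long bz;

            /*
             * The hunk under discussion: on bare metal, hotplug alignment
             * isn't a concern, so take the largest block size immediately.
             */
            if (bare_metal())
                    return MAX_BLOCK_SIZE;

            /* Otherwise pick the largest size the end of memory aligns to. */
            for (bz = MAX_BLOCK_SIZE; bz > MIN_MEMORY_BLOCK_SIZE; bz >>= 1)
                    if (IS_ALIGNED(boot_mem_end, bz))
                            break;

            return bz;
    }

    int main(void)
    {
            /* Hypothetical end of memory (4G + 128M); assumes 64-bit long. */
            unsigned long boot_mem_end = (4UL << 30) + (128UL << 20);

            printf("chosen block size: %luM\n",
                   choose_block_size(boot_mem_end) >> 20);
            return 0;
    }

With bare_metal() returning true the sketch always picks 2G; flipping it to false exercises the alignment fallback instead.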
On Wed, Jun 10, 2020 at 09:30:00AM +0200, David Hildenbrand wrote:
> On 10.06.20 09:20, David Hildenbrand wrote:
> > On 10.06.20 00:54, Daniel Jordan wrote:
> >> @@ -1390,6 +1391,15 @@ static unsigned long probe_memory_block_size(void)
> >> goto done;
> >> }
> >>
> >> + /*
> >> + * Use max block size to minimize overhead on bare metal, where
On 10.06.20 09:20, David Hildenbrand wrote:
> On 10.06.20 00:54, Daniel Jordan wrote:
>> Some of our servers spend significant time at kernel boot initializing
>> memory block sysfs directories and then creating symlinks between them
>> and the corresponding nodes. The slowness happens because the machines
>> get stuck with the smallest supported memory block size on x86 (128M)
On 10.06.20 00:54, Daniel Jordan wrote:
> Some of our servers spend significant time at kernel boot initializing
> memory block sysfs directories and then creating symlinks between them
> and the corresponding nodes. The slowness happens because the machines
> get stuck with the smallest supported memory block size on x86 (128M)
On Tue, Jun 09, 2020 at 06:54:51PM -0400, Daniel Jordan wrote:
> Some of our servers spend significant time at kernel boot initializing
> memory block sysfs directories and then creating symlinks between them
> and the corresponding nodes. The slowness happens because the machines
> get stuck with the smallest supported memory block size on x86 (128M)
Some of our servers spend significant time at kernel boot initializing
memory block sysfs directories and then creating symlinks between them
and the corresponding nodes. The slowness happens because the machines
get stuck with the smallest supported memory block size on x86 (128M),
which results
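A rough back-of-the-envelope illustration of why the block size matters (the machine size here is hypothetical, not the one from the report): each memory block gets a sysfs directory plus node symlinks, so the amount of boot-time work scales with installed RAM divided by the block size, and moving from 128M to 2G blocks shrinks it 16-fold.

    #include <stdio.h>

    int main(void)
    {
            unsigned long ram   = 1UL << 40;    /* hypothetical 1T machine, assumes 64-bit long */
            unsigned long small = 128UL << 20;  /* 128M memory block size */
            unsigned long large = 2UL << 30;    /* 2G memory block size */

            /* One memory block sysfs directory (plus node symlinks) per block. */
            printf("128M blocks: %lu directories\n", ram / small);  /* 8192 */
            printf("2G blocks:   %lu directories\n", ram / large);  /* 512 */
            return 0;
    }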