On Fri, Oct 28, 2022 at 04:22:54PM -0500, Eric DeVolder wrote:
> /*
> * For the kexec_file_load() syscall path, specify the maximum number of
> * memory regions that the elfcorehdr buffer/segment can accommodate.
> * These regions are obtained via walk_system_ram_res(); e.g. the
> * 'System RAM'
On Fri, Oct 28, 2022 at 02:26:58PM -0500, Eric DeVolder wrote:
> config CRASH_MAX_MEMORY_RANGES
> depends on CRASH_DUMP && KEXEC_FILE && MEMORY_HOTPLUG
> int
> default 8192
> help
> For the kexec_file_load path, specify the maximum number of
> memory regions, e.g. as repr
On Fri, Oct 28, 2022 at 10:29:45AM -0500, Eric DeVolder wrote:
> So it is with this in mind that I suggest we stay with the statically sized
> elfcorehdr buffer.
>
> If that can be agreed upon, then it is "just a matter" of picking a useful
> elfcorehdr size. Currently that size is derived from t
On Thu, Oct 27, 2022 at 02:24:11PM -0500, Eric DeVolder wrote:
> Be aware, in reality, that if the system was fully populated, it would not
> actually consume all 8192 phdrs. Rather /proc/iomem would essentially show a
> large contiguous address space which would require just a single phdr.
Then t
On 10/25/22 at 12:31pm, Borislav Petkov wrote:
> On Thu, Oct 13, 2022 at 10:57:28AM +0800, Baoquan He wrote:
> > The concern about the range count is mainly on virt guest systems.
>
> And why would virt emulate 1K hotpluggable DIMM slots and not emulate a
> real machine?
Well, currently, mem hotplug is
On Wed, Oct 12, 2022 at 11:20:59AM -0500, Eric DeVolder wrote:
> I once had CONFIG_CRASH_HOTPLUG, but you disagreed.
>
> https://lore.kernel.org/lkml/ylgot+ludql+g%2...@zn.tnic/
>
> From there I simply went with
>
> #if defined(CONFIG_HOTPLUG_CPU) || defined(CONFIG_MEMORY_HOTPLUG)
>
> which ro
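The guard could also live behind one symbol, as the earlier CONFIG_CRASH_HOTPLUG idea suggested; a hypothetical Kconfig sketch (the exact name and dependencies are assumptions, not merged code):

```
config CRASH_HOTPLUG
	def_bool CRASH_DUMP && (HOTPLUG_CPU || MEMORY_HOTPLUG)
```

C code would then test a single #ifdef CONFIG_CRASH_HOTPLUG instead of the compound #if.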
On Thu, Oct 13, 2022 at 10:57:28AM +0800, Baoquan He wrote:
> The concern about the range count is mainly on virt guest systems.
And why would virt emulate 1K hotpluggable DIMM slots and not emulate a
real machine?
> On bare-metal systems, basically only very high-end servers support
> memory hotplug. I e
On Wed, Oct 12, 2022 at 03:19:19PM -0500, Eric DeVolder wrote:
> We run here QEMU with the ability for 1024 DIMM slots.
QEMU, haha.
What is the highest count of DIMM slots which are hotpluggable on a
real, *physical* system today? Are you saying you can have 1K DIMM slots
on a board?
I hardly do
On Sat, Oct 08, 2022 at 10:35:14AM +0800, Baoquan He wrote:
> Memory hotplug is not limited by a certain or maximum number of memory
> regions, but by how large a linear mapping range physical memory can
> be mapped into.
Memory hotplug is not limited by some abstract range but by the *act
On 10/11/22 23:55, Sourabh Jain wrote:
If kmap_local_page() works for all archs, then I'm happy to drop these
arch_ variants and use it directly.
Yes, pls do.
I'll check with Sourabh to see if PPC can work with kmap_local_page().
I think kmap_local_page does work on PowerPC. But can yo
On Fri, Sep 30, 2022 at 12:11:26PM -0500, Eric DeVolder wrote:
> There is of course a way to enumerate the memory regions in use on the
> machine, that is not what this code needs. In order to compute the maximum
> buffer size needed (this buffer size is computed once), the count of the
> maximum n
On 9/30/22 11:50, Borislav Petkov wrote:
On Fri, Sep 30, 2022 at 10:36:49AM -0500, Eric DeVolder wrote:
Your help text talks about System RAM entries in /proc/iomem which means
that those entries are present somewhere in the kernel and you can read
them out and do the proper calculations dyna
On Wed, Sep 28, 2022 at 06:07:24PM +0200, Borislav Petkov wrote:
> #if defined(CONFIG_HOTPLUG_CPU) || defined(CONFIG_MEMORY_HOTPLUG)
> /* Ensure elfcorehdr segment large enough for hotplug changes */
> @@ -407,9 +408,8 @@ int crash_load_segments(struct kimage *image)
> image->elf_heade
On Tue, Sep 13, 2022 at 02:12:31PM -0500, Eric DeVolder wrote:
> This topic was discussed previously https://lkml.org/lkml/2022/3/3/372.
Please do not use lkml.org to refer to lkml messages. We have a
perfectly fine archival system at lore.kernel.org. You simply do
https://lore.kernel.org/r/
whe
Boris,
I've a few questions for you below. With your responses, I am hopeful we can
finish this series soon!
Thanks,
eric
On 9/13/22 14:12, Eric DeVolder wrote:
Boris,
Thanks for the feedback! Inline responses below.
eric
On 9/12/22 01:52, Borislav Petkov wrote:
On Fri, Sep 09, 2022 at 05:05:
On 10/09/22 02:35, Eric DeVolder wrote:
For x86_64, when CPU or memory is hot un/plugged, the crash
elfcorehdr, which describes the CPUs and memory in the system,
must also be updated.
When loading the crash kernel via kexec_load or kexec_file_load,
the elfcorehdr is identified at run time in
crash_core:handle_hotplug_event().
To updat