Re: [RFC PATCH v2 0/7] Introduce persistent memory pool

2023-09-29 Thread Stanislav Kinsburskii
On Fri, Sep 29, 2023 at 01:13:24PM +0300, Shutemov, Kirill wrote:
> On Wed, Sep 27, 2023 at 07:46:36PM -0700, Stanislav Kinsburskii wrote:
> > I'd answer yes, "System RAM" must be persisted across kexec.
> > Could you elaborate on why there should be a mechanism to tell the
> > kernel anything special about the existing memory map in this context?
> > Say, one can reserve a CMA region (or a crash kernel region, etc.), store
> > some data there, and then pass it across kexec. The reserved CMA region
> > will still be part of the memory map as "System RAM", won't it?
> 
> Em. When the crash kernel starts, all System RAM of the first kernel
> becomes E820_TYPE_RESERVED and only memory pre-allocated for the crash
> scenario becomes E820_TYPE_RAM. See crash_setup_memmap_entries().
> 
> Can't you go the same path? Report all deposited memory as
> E820_TYPE_RESERVED.
> 

Sure, I can.
That approach makes the corresponding command-line option a hard
requirement, though, and is therefore less flexible. But if passing a device
tree across kexec on x86 is the major concern, then of course I can change it
the way you suggest.
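
For illustration, the marking could look roughly like the sketch below. This
is not the series' code: pmpool_for_each_region() is a hypothetical iterator
over the deposited ranges, and the rest just mirrors what
crash_setup_memmap_entries() does for the crash kernel.

/*
 * Sketch only: append every deposited range to the boot_params e820 table
 * of the kexec'd kernel as E820_TYPE_RESERVED, so even an old kernel simply
 * treats it as "not RAM". pmpool_for_each_region() is hypothetical.
 */
#include <linux/types.h>
#include <linux/errno.h>
#include <asm/bootparam.h>
#include <asm/e820/types.h>

static int pmpool_mark_e820_reserved(struct boot_params *params)
{
        u64 start, size;

        pmpool_for_each_region(start, size) {
                struct boot_e820_entry *e;

                if (params->e820_entries >= E820_MAX_ENTRIES_ZEROPAGE)
                        return -ENOSPC;

                e = &params->e820_table[params->e820_entries++];
                e->addr = start;
                e->size = size;
                e->type = E820_TYPE_RESERVED;
        }

        return 0;
}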

> Or do you have too many deposited memory ranges, so we would run out of
> e820 entries?
> 

No, I don't think I do.
I can imagine such a pool with a lot of regions exhausting the e820
table, but the implementation currently proposed is based on CMA and is thus
limited to 19 entries by default, so I guess running out of e820 entries
is unlikely in real-world scenarios.
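
For reference, the pool in this series keeps the CMA allocation bitmap inside
the reserved region itself (per the cover letter), which is how its state
survives kexec. A rough, much-simplified sketch of that idea with hypothetical
names, not the actual patch code (a real version would also mark the bitmap's
own pages as used):

#include <linux/bitmap.h>
#include <linux/io.h>
#include <linux/mm.h>
#include <linux/pfn.h>

struct pmpool {
        unsigned long *bitmap;          /* lives in the region, not kmalloc'ed */
        unsigned long base_pfn;
        unsigned long nr_pages;
};

static void pmpool_init(struct pmpool *p, phys_addr_t base, phys_addr_t size)
{
        p->base_pfn = PHYS_PFN(base);
        p->nr_pages = size >> PAGE_SHIFT;
        /* Map the head of the region and use it as the bitmap, so a kexec'd
         * kernel that maps the same region sees the same allocation state. */
        p->bitmap = memremap(base, BITS_TO_LONGS(p->nr_pages) * sizeof(long),
                             MEMREMAP_WB);
}

/* Returns the first PFN of the allocation, or 0 on failure. */
static unsigned long pmpool_alloc_pages(struct pmpool *p, unsigned int nr)
{
        unsigned long bit;

        bit = bitmap_find_next_zero_area(p->bitmap, p->nr_pages, 0, nr, 0);
        if (bit >= p->nr_pages)
                return 0;

        bitmap_set(p->bitmap, bit, nr);
        return p->base_pfn + bit;
}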

Thanks,
Stanislav

> -- 
>   Kiryl Shutsemau / Kirill A. Shutemov



Re: [RFC PATCH v2 0/7] Introduce persistent memory pool

2023-09-29 Thread Shutemov, Kirill
On Wed, Sep 27, 2023 at 07:46:36PM -0700, Stanislav Kinsburskii wrote:
> I'd answer yes, "System RAM" must be persisted across kexec.
> Could you elaborate on why there should be a mechanism to tell the
> kernel anything special about the existing memory map in this context?
> Say, one can reserve a CMA region (or a crash kernel region, etc.), store
> some data there, and then pass it across kexec. The reserved CMA region
> will still be part of the memory map as "System RAM", won't it?

Em. When the crash kernel starts, all System RAM of the first kernel
becomes E820_TYPE_RESERVED and only memory pre-allocated for the crash
scenario becomes E820_TYPE_RAM. See crash_setup_memmap_entries().

Can't you go the same path? Report all deposited memory as
E820_TYPE_RESERVED.

Or do you have too many deposited memory ranges, so we would run out of
e820 entries?

-- 
  Kiryl Shutsemau / Kirill A. Shutemov



Re: [RFC PATCH v2 0/7] Introduce persistent memory pool

2023-09-28 Thread Stanislav Kinsburskii
On Fri, Sep 29, 2023 at 07:56:37AM +0800, Baoquan He wrote:
> On 09/27/23 at 07:46pm, Stanislav Kinsburskii wrote:
> > On Thu, Sep 28, 2023 at 12:16:31PM -0700, Dave Hansen wrote:
> > > On 9/27/23 17:38, Stanislav Kinsburskii wrote:
> > > > On Thu, Sep 28, 2023 at 11:00:12AM -0700, Dave Hansen wrote:
> > > >> On 9/27/23 17:02, Stanislav Kinsburskii wrote:
> > > >>> On Thu, Sep 28, 2023 at 10:29:32AM -0700, Dave Hansen wrote:
> > > >> ...
> > > >>> Well, not exactly. That's something I'd like to have indeed, but from my
> > > >>> POV this goal is out of scope of discussion at the moment.
> > > >>> Let me try to express it the same way you did above:
> > > >>>
> > > >>> 1. Boot some kernel
> > > >>> 2. Grow the deposited memory a bunch
> > > >>> 3. Kexec
> > > >>> 4. Kernel panic due to a GPF upon accessing the memory deposited to the
> > > >>> hypervisor.
> > > >>
> > > >> I basically consider this a bug in the first kernel.  It *can't* kexec
> > > >> when it's left RAM in shambles.  It doesn't know what features the new
> > > >> kernel has and whether this is even safe.
> > > >>
> > > > 
> > > > Could you elaborate more on why this is a bug in the first kernel?
> > > > Say, kernel memory can be allocated in big, physically contiguous
> > > > chunks by the first kernel for depositing. The information about these
> > > > chunks is then passed to the second kernel via FDT or even the command
> > > > line, so the second kernel can reserve these regions during booting.
> > > > What's wrong with this approach?
> > > 
> > > How do you know the second kernel can parse the FDT entry or the
> > > command-line you pass to it?
> > > 
> > > >> Can the new kernel even read the new device tree data?
> > > > 
> > > > I'm not sure I understand the question, to be honest.
> > > > Why can't it? This series contains code parts for both the first and
> > > > second kernels.
> > > 
> > > How do you know the second kernel isn't the version *before* this series
> > > gets merged?
> > > 
> > 
> > The answer to both questions above is the following: the feature is deployed
> > fleet-wide first, and enabled only upon the next deployment.
> > It is worth mentioning that fleet-wide deployments usually don't need to
> > support updates to a version older than the previous one.
> > Also, since kexec is initiated by user space, user space can always be
> > enlightened about kernel capabilities and simply not kexec into an
> > incompatible kernel version.
> > One more bit to mention: in real life this problem exists only
> > during the initial transition, since once the upgrade to a kernel with the
> > feature has happened, there won't be a revert to a version without it.
> > 
> > > ...
> > > >> I still think the only way this will possibly work when kexec'ing both
> > > >> old and new kernels is to do it with the memory maps that *all* kernels
> > > >> can read.
> > > > 
> > > > Could you elaborate more on this?
> > > > The available memory map actually stays the same for both kernels. The
> > > > difference can be in the list of memory regions to reserve: when the
> > > > first kernel has allocated and deposited another chunk, the second
> > > > kernel needs to reserve this memory as a new region upon booting.
> > > 
> > > Please take a step back from your implementation for a moment.  There
> > > are two basic design points that need to be considered.
> > > 
> > > First, *must* "System RAM" (according to the memory map) be persisted
> > > across kexec?  If no, then there's no problem to solve and we can stop
> > > this thread.  If yes, then some mechanism must be used to tell the new
> > > kernel that the "System RAM" in the memory map is not normal RAM.
> > > 
> > > Second, *if* we agree that some data must communicate across kexec, then
> > > what mechanism should be used?  You're arguing for a new mechanism that
> > > only new kernels can use.  I'm arguing that you should likely reuse an
> > > existing mechanism (probably the UEFI/e820 maps) so that *ALL* kernels
> > > can consume the information, old and new.
> > > 
> > 
> > I'd answer yes, "System RAM" must be persisted across kexec.
> > Could you elaborate on why there should be a mechanism to tell the
> > kernel anything special about the existing memory map in this context?
> > Say, one can reserve a CMA region (or a crash kernel region, etc.), store
> > some data there, and then pass it across kexec. The reserved CMA region
> > will still be part of the memory map as "System RAM", won't it?
> 
> Well, I haven't gone through the whole discussion thread to clearly get
> your intention and motivation. But here I have to say there's a
> misunderstanding. At least I was astonished when I read the above
> description. Who said a CMA region or a crash kernel region needs to be
> passed across kexec? Think of kexec as a bootloader; in essence it's no
> different than any other bootloader. When it jumps to the 2nd kernel, the
> whole system will be booted up and reconstructed on the system 

Re: [RFC PATCH v2 0/7] Introduce persistent memory pool

2023-09-28 Thread Baoquan He
On 09/27/23 at 07:46pm, Stanislav Kinsburskii wrote:
> On Thu, Sep 28, 2023 at 12:16:31PM -0700, Dave Hansen wrote:
> > On 9/27/23 17:38, Stanislav Kinsburskii wrote:
> > > On Thu, Sep 28, 2023 at 11:00:12AM -0700, Dave Hansen wrote:
> > >> On 9/27/23 17:02, Stanislav Kinsburskii wrote:
> > >>> On Thu, Sep 28, 2023 at 10:29:32AM -0700, Dave Hansen wrote:
> > >> ...
> > >>> Well, not exactly. That's something I'd like to have indeed, but from my
> > >>> POV this goal is out of scope of discussion at the moment.
> > >>> Let me try to express it the same way you did above:
> > >>>
> > >>> 1. Boot some kernel
> > >>> 2. Grow the deposited memory a bunch
> > >>> 3. Kexec
> > >>> 4. Kernel panic due to a GPF upon accessing the memory deposited to the
> > >>> hypervisor.
> > >>
> > >> I basically consider this a bug in the first kernel.  It *can't* kexec
> > >> when it's left RAM in shambles.  It doesn't know what features the new
> > >> kernel has and whether this is even safe.
> > >>
> > > 
> > > Could you elaborate more on why this is a bug in the first kernel?
> > > Say, kernel memory can be allocated in big, physically contiguous
> > > chunks by the first kernel for depositing. The information about these
> > > chunks is then passed to the second kernel via FDT or even the command
> > > line, so the second kernel can reserve these regions during booting.
> > > What's wrong with this approach?
> > 
> > How do you know the second kernel can parse the FDT entry or the
> > command-line you pass to it?
> > 
> > >> Can the new kernel even read the new device tree data?
> > > 
> > > I'm not sure I understand the question, to be honest.
> > > Why can't it? This series contains code parts for both the first and
> > > second kernels.
> > 
> > How do you know the second kernel isn't the version *before* this series
> > gets merged?
> > 
> 
> The answer to both questions above is the following: the feature is deployed
> fleet-wide first, and enabled only upon the next deployment.
> It is worth mentioning that fleet-wide deployments usually don't need to
> support updates to a version older than the previous one.
> Also, since kexec is initiated by user space, user space can always be
> enlightened about kernel capabilities and simply not kexec into an
> incompatible kernel version.
> One more bit to mention: in real life this problem exists only
> during the initial transition, since once the upgrade to a kernel with the
> feature has happened, there won't be a revert to a version without it.
> 
> > ...
> > >> I still think the only way this will possibly work when kexec'ing both
> > >> old and new kernels is to do it with the memory maps that *all* kernels
> > >> can read.
> > > 
> > > Could you elaborate more on this?
> > > The available memory map actually stays the same for both kernels. The
> > > difference can be in the list of memory regions to reserve: when the
> > > first kernel has allocated and deposited another chunk, the second
> > > kernel needs to reserve this memory as a new region upon booting.
> > 
> > Please take a step back from your implementation for a moment.  There
> > are two basic design points that need to be considered.
> > 
> > First, *must* "System RAM" (according to the memory map) be persisted
> > across kexec?  If no, then there's no problem to solve and we can stop
> > this thread.  If yes, then some mechanism must be used to tell the new
> > kernel that the "System RAM" in the memory map is not normal RAM.
> > 
> > Second, *if* we agree that some data must communicate across kexec, then
> > what mechanism should be used?  You're arguing for a new mechanism that
> > only new kernels can use.  I'm arguing that you should likely reuse an
> > existing mechanism (probably the UEFI/e820 maps) so that *ALL* kernels
> > can consume the information, old and new.
> > 
> 
> I'd answer yes, "System RAM" must be persisted across kexec.
> Could you elaborate on why there should be a mechanism to tell the
> kernel anything special about the existing memory map in this context?
> Say, one can reserve a CMA region (or a crash kernel region, etc.), store
> some data there, and then pass it across kexec. The reserved CMA region
> will still be part of the memory map as "System RAM", won't it?

Well, I haven't gone through the whole discussion thread to clearly get
your intention and motivation. But here I have to say there's a
misunderstanding. At least I was astonished when I read the above
description. Who said a CMA region or a crash kernel region needs to be
passed across kexec? Think of kexec as a bootloader; in essence it's no
different than any other bootloader. When it jumps to the 2nd kernel, the
whole system will be booted up and reconstructed on the system resources.
The only difference with kexec is that it won't go through firmware to do the
detecting/testing/init. If the intention is to preserve any state or
region of the 1st kernel, you have absolutely got it wrong.

This is not the first time people want to put burden on kexec 

Re: [RFC PATCH v2 0/7] Introduce persistent memory pool

2023-09-28 Thread Stanislav Kinsburskii
On Thu, Sep 28, 2023 at 12:16:31PM -0700, Dave Hansen wrote:
> On 9/27/23 17:38, Stanislav Kinsburskii wrote:
> > On Thu, Sep 28, 2023 at 11:00:12AM -0700, Dave Hansen wrote:
> >> On 9/27/23 17:02, Stanislav Kinsburskii wrote:
> >>> On Thu, Sep 28, 2023 at 10:29:32AM -0700, Dave Hansen wrote:
> >> ...
> >>> Well, not exactly. That's something I'd like to have indeed, but from my
> >>> POV this goal is out of scope of discussion at the moment.
> >>> Let me try to express it the same way you did above:
> >>>
> >>> 1. Boot some kernel
> >>> 2. Grow the deposited memory a bunch
> >>> 3. Kexec
> >>> 4. Kernel panic due to a GPF upon accessing the memory deposited to the
> >>> hypervisor.
> >>
> >> I basically consider this a bug in the first kernel.  It *can't* kexec
> >> when it's left RAM in shambles.  It doesn't know what features the new
> >> kernel has and whether this is even safe.
> >>
> > 
> > Could you elaborate more on why this is a bug in the first kernel?
> > Say, kernel memory can be allocated in big, physically contiguous
> > chunks by the first kernel for depositing. The information about these
> > chunks is then passed to the second kernel via FDT or even the command
> > line, so the second kernel can reserve these regions during booting.
> > What's wrong with this approach?
> 
> How do you know the second kernel can parse the FDT entry or the
> command-line you pass to it?
> 
> >> Can the new kernel even read the new device tree data?
> > 
> > I'm not sure I understand the question, to be honest.
> > Why can't it? This series contains code parts for both the first and
> > second kernels.
> 
> How do you know the second kernel isn't the version *before* this series
> gets merged?
> 

The answer to both questions above is the following: the feature is deployed
fleet-wide first, and enabled only upon the next deployment.
It is worth mentioning that fleet-wide deployments usually don't need to
support updates to a version older than the previous one.
Also, since kexec is initiated by user space, user space can always be
enlightened about kernel capabilities and simply not kexec into an
incompatible kernel version.
One more bit to mention: in real life this problem exists only
during the initial transition, since once the upgrade to a kernel with the
feature has happened, there won't be a revert to a version without it.

> ...
> >> I still think the only way this will possibly work when kexec'ing both
> >> old and new kernels is to do it with the memory maps that *all* kernels
> >> can read.
> > 
> > Could you elaborate more on this?
> > The available memory map actually stays the same for both kernels. The
> > difference can be in the list of memory regions to reserve: when the
> > first kernel has allocated and deposited another chunk, the second
> > kernel needs to reserve this memory as a new region upon booting.
> 
> Please take a step back from your implementation for a moment.  There
> are two basic design points that need to be considered.
> 
> First, *must* "System RAM" (according to the memory map) be persisted
> across kexec?  If no, then there's no problem to solve and we can stop
> this thread.  If yes, then some mechanism must be used to tell the new
> kernel that the "System RAM" in the memory map is not normal RAM.
> 
> Second, *if* we agree that some data must communicate across kexec, then
> what mechanism should be used?  You're arguing for a new mechanism that
> only new kernels can use.  I'm arguing that you should likely reuse an
> existing mechanism (probably the UEFI/e820 maps) so that *ALL* kernels
> can consume the information, old and new.
> 

I'd answer yes, "System RAM" must be persisted across kexec.
Could you elaborate on why there should be a mechanism to tell the
kernel anything special about the existing memory map in this context?
Say, one can reserve a CMA region (or a crash kernel region, etc.), store
some data there, and then pass it across kexec. The reserved CMA region
will still be part of the memory map as "System RAM", won't it?

Regarding the communication mechanism, device tree is indeed not the only
option.
However, could you elaborate on how an e820 extension can help to
communicate things here without introducing a new ABI?
And if it can't be done without a new ABI, then why is an e820 extension
better than a device tree extension? AFAIU, e820 isn't really designed to
carry arbitrary data bits.
Are you suggesting introducing another e820 type, like E820_TYPE_PMPOOL?

> I'm not convinced that this series is going in the right direction on
> either of those points.
> 

I understand the skepticism. I appreciate your efforts in helping to
find a solution.

> > Can all this be considered as, say, the first kernel using the device tree
> > to inform the second kernel about the memory regions to reserve?
> > In this case the first kernel behaves a bit like a piece of firmware for
> > the second one.
> > 
> >> Can the hypervisor be improved to make this release operation faster?
> > 
> > I 

Re: [RFC PATCH v2 0/7] Introduce persistent memory pool

2023-09-28 Thread Dave Hansen
On 9/27/23 17:38, Stanislav Kinsburskii wrote:
> On Thu, Sep 28, 2023 at 11:00:12AM -0700, Dave Hansen wrote:
>> On 9/27/23 17:02, Stanislav Kinsburskii wrote:
>>> On Thu, Sep 28, 2023 at 10:29:32AM -0700, Dave Hansen wrote:
>> ...
>>> Well, not exactly. That's something I'd like to have indeed, but from my
>>> POV this goal is out of scope of discussion at the moment.
>>> Let me try to express it the same way you did above:
>>>
>>> 1. Boot some kernel
>>> 2. Grow the deposited memory a bunch
>>> 3. Kexec
>>> 4. Kernel panic due to a GPF upon accessing the memory deposited to the
>>> hypervisor.
>>
>> I basically consider this a bug in the first kernel.  It *can't* kexec
>> when it's left RAM in shambles.  It doesn't know what features the new
>> kernel has and whether this is even safe.
>>
> 
> Could you elaborate more on why this is a bug in the first kernel?
> Say, kernel memory can be allocated in big, physically contiguous
> chunks by the first kernel for depositing. The information about these
> chunks is then passed to the second kernel via FDT or even the command
> line, so the second kernel can reserve these regions during booting.
> What's wrong with this approach?

How do you know the second kernel can parse the FDT entry or the
command-line you pass to it?

>> Can the new kernel even read the new device tree data?
> 
> I'm not sure I understand the question, to be honest.
> Why can't it? This series contains code parts for both the first and
> second kernels.

How do you know the second kernel isn't the version *before* this series
gets merged?

...
>> I still think the only way this will possibly work when kexec'ing both
>> old and new kernels is to do it with the memory maps that *all* kernels
>> can read.
> 
> Could you elaborate more on this?
> The available memory map actually stays the same for both kernels. The
> difference can be in the list of memory regions to reserve: when the
> first kernel has allocated and deposited another chunk, the second
> kernel needs to reserve this memory as a new region upon booting.

Please take a step back from your implementation for a moment.  There
are two basic design points that need to be considered.

First, *must* "System RAM" (according to the memory map) be persisted
across kexec?  If no, then there's no problem to solve and we can stop
this thread.  If yes, then some mechanism must be used to tell the new
kernel that the "System RAM" in the memory map is not normal RAM.

Second, *if* we agree that some data must communicate across kexec, then
what mechanism should be used?  You're arguing for a new mechanism that
only new kernels can use.  I'm arguing that you should likely reuse an
existing mechanism (probably the UEFI/e820 maps) so that *ALL* kernels
can consume the information, old and new.

I'm not convinced that this series is going in the right direction on
either of those points.

> Can all this be considered as, say, the first kernel using the device tree
> to inform the second kernel about the memory regions to reserve?
> In this case the first kernel behaves a bit like a piece of firmware for
> the second one.
> 
>> Can the hypervisor be improved to make this release operation faster?
> 
> I guess it can, but shutting down guests contributes to downtime the
> most. And without shutting down the guests the deposited memory can't be
> withdrawn.

Do you really need to fully shut down each guest?  Or do you just need
to get them to a quiescent state where the hypervisor and devices aren't
writing to the deposited memory?



Re: [RFC PATCH v2 0/7] Introduce persistent memory pool

2023-09-28 Thread Stanislav Kinsburskii
On Thu, Sep 28, 2023 at 11:00:12AM -0700, Dave Hansen wrote:
> On 9/27/23 17:02, Stanislav Kinsburskii wrote:
> > On Thu, Sep 28, 2023 at 10:29:32AM -0700, Dave Hansen wrote:
> ...
> > Well, not exactly. That's something I'd like to have indeed, but from my
> > POV this goal is out of scope of discussion at the moment.
> > Let me try to express it the same way you did above:
> > 
> > 1. Boot some kernel
> > 2. Grow the deposited memory a bunch
> > 3. Kexec
> > 4. Kernel panic due to a GPF upon accessing the memory deposited to the
> > hypervisor.
> 
> I basically consider this a bug in the first kernel.  It *can't* kexec
> when it's left RAM in shambles.  It doesn't know what features the new
> kernel has and whether this is even safe.
> 

Could you elaborate more on why this is a bug in the first kernel?
Say, kernel memory can be allocated in big, physically contiguous
chunks by the first kernel for depositing. The information about these
chunks is then passed to the second kernel via FDT or even the command
line, so the second kernel can reserve these regions during booting.
What's wrong with this approach?

> Can the new kernel even read the new device tree data?
> 

I'm not sure I understand the question, to be honest.
Why can't it? This series contains code parts for both the first and
second kernels.

> >> Can't the deposited memory just be shrunk before kexec?  Surely there
> >> aren't a bunch of pathological things consuming that memory right before
> >> kexec, which is basically a reboot.
> > 
> > In general it can. But for this to happen the hypervisor needs to release
> > this memory, and it can release the memory only if the guests are stopped.
> > Stopping the guests during kexec isn't something we want to have in the
> > long run.
> > Also, even if we stop the guests before kexec, we need to restart them
> > after boot, meaning we have to deposit the pages once again.
> > All of this (stopping the guests, withdrawing the pages upon kexec,
> > allocating after boot and depositing them again) significantly affects
> > guest downtime.
> 
> Ahh, and you're presumably kexec'ing in the first place because you've
> got a bug in the first kernel and you want a second kernel with fewer bugs.
> 

Right. All this is for "kernel servicing" purposes, when kexec is used
to update the kernel in a fleet in an attempt to reduce user downtime
as much as possible.
I'm sorry for keeping this bit of context to myself instead of
explicitly stating it in the series description: it wasn't intentional.

> I still think the only way this will possibly work when kexec'ing both
> old and new kernels is to do it with the memory maps that *all* kernels
> can read.
> 

Could you elaborate more on this?
The available memory map actually stays the same for both kernels. The
difference can be in the list of memory regions to reserve: when the
first kernel has allocated and deposited another chunk, the second
kernel needs to reserve this memory as a new region upon booting.

Can all this be considered as, say, the first kernel using the device tree
to inform the second kernel about the memory regions to reserve?
In this case the first kernel behaves a bit like a piece of firmware for
the second one.
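
To make that concrete, the consumer side could be as small as an early boot
scan of the passed FDT. A sketch only: the "default_pmpool" node name and the
"reg" layout below are assumptions, not the final binding.

#include <linux/init.h>
#include <linux/memblock.h>
#include <linux/of_fdt.h>
#include <linux/string.h>
#include <asm/byteorder.h>

/* Reserve every <base size> pair found in the (assumed) pool node. */
static int __init early_pmpool_scan(unsigned long node, const char *uname,
                                    int depth, void *data)
{
        const __be64 *reg;
        int i, pairs, len;

        if (strcmp(uname, "default_pmpool") != 0)
                return 0;       /* keep scanning */

        reg = of_get_flat_dt_prop(node, "reg", &len);
        if (!reg)
                return 1;

        pairs = len / (2 * sizeof(__be64));
        for (i = 0; i < pairs; i++, reg += 2)
                memblock_reserve(be64_to_cpu(reg[0]), be64_to_cpu(reg[1]));

        return 1;               /* found it, stop scanning */
}

void __init pmpool_early_reserve(void)
{
        of_scan_flat_dt(early_pmpool_scan, NULL);
}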

> Can the hypervisor be improved to make this release operation faster?

I guess it can, but shutting down guests contributes to downtime the
most. And without shutting down the guests the deposited memory can't be
withdrawn.

Thanks,
Stanislav



RE: [EXTERNAL] Re: [RFC PATCH v2 0/7] Introduce persistent memory pool

2023-09-28 Thread KY Srinivasan



> -Original Message-
> From: Dave Hansen 
> Sent: Thursday, September 28, 2023 10:38 AM
> To: David Hildenbrand ; Stanislav Kinsburskii
> ; Baoquan He 
> Cc: t...@linutronix.de; mi...@redhat.com; b...@alien8.de;
> dave.han...@linux.intel.com; x...@kernel.org; h...@zytor.com;
> ebied...@xmission.com; a...@linux-foundation.org;
> stanislav.kinsburs...@gmail.com; cor...@lwn.net; linux-
> ker...@vger.kernel.org; kexec@lists.infradead.org; linux...@kvack.org; KY
> Srinivasan ; jgow...@amazon.com; wei@kernel.org;
> a...@arndb.de; gre...@linuxfoundation.org; g...@amazon.de;
> pbonz...@redhat.com
> Subject: [EXTERNAL] Re: [RFC PATCH v2 0/7] Introduce persistent memory pool
> 
> On 9/28/23 10:35, David Hildenbrand wrote:
> > On 28.09.23 15:22, Dave Hansen wrote:
> >> On 9/27/23 09:13, Stanislav Kinsburskii wrote:
> >>> Once deposited, these pages can't be accessed by Linux anymore and
> >>> thus must be preserved in "used" state across kexec, as hypervisor
> >>> state is unaware of kexec.
> >>
> >> If Linux can't access them, they're not RAM any more.  I'd much
> >> rather remove them from the memory map and move on with life rather
> >> than implement a bunch of new ABI that's got to be handed across kernels.
> >
> > The motivation of handling kexec (faster?) in a Hyper-V domain doesn't
> > sound particularly compelling to me for such features. If you
> > inflated memory, just don't allow kexec. It's been broken for years IIUC.
> 
> That's a good point.  What prevents deflating before kexec?


The guest has returned the memory to the host as part of inflating the
balloon, and so this memory has to be returned by the host before you can
deflate. The best option is to not kexec when the memory has been returned
to the host.

Regards,

K. Y


Re: [RFC PATCH v2 0/7] Introduce persistent memory pool

2023-09-28 Thread Dave Hansen
On 9/27/23 17:02, Stanislav Kinsburskii wrote:
> On Thu, Sep 28, 2023 at 10:29:32AM -0700, Dave Hansen wrote:
...
> Well, not exactly. That's something I'd like to have indeed, but from my
> POV this goal is out of scope of discussion at the moment.
> Let me try to express it the same way you did above:
> 
> 1. Boot some kernel
> 2. Grow the deposited memory a bunch
> 3. Kexec
> 4. Kernel panic due to a GPF upon accessing the memory deposited to the
> hypervisor.

I basically consider this a bug in the first kernel.  It *can't* kexec
when it's left RAM in shambles.  It doesn't know what features the new
kernel has and whether this is even safe.

Can the new kernel even read the new device tree data?

>> Can't the deposited memory just be shrunk before kexec?  Surely there
>> aren't a bunch of pathological things consuming that memory right before
>> kexec, which is basically a reboot.
> 
> In general it can. But for this to happen the hypervisor needs to release
> this memory, and it can release the memory only if the guests are stopped.
> Stopping the guests during kexec isn't something we want to have in the
> long run.
> Also, even if we stop the guests before kexec, we need to restart them
> after boot, meaning we have to deposit the pages once again.
> All of this (stopping the guests, withdrawing the pages upon kexec,
> allocating after boot and depositing them again) significantly affects
> guest downtime.

Ahh, and you're presumably kexec'ing in the first place because you've
got a bug in the first kernel and you want a second kernel with fewer bugs.

I still think the only way this will possibly work when kexec'ing both
old and new kernels is to do it with the memory maps that *all* kernels
can read.

Can the hypervisor be improved to make this release operation faster?



Re: [RFC PATCH v2 0/7] Introduce persistent memory pool

2023-09-28 Thread Stanislav Kinsburskii
On Thu, Sep 28, 2023 at 10:29:32AM -0700, Dave Hansen wrote:
> On 9/27/23 16:25, Stanislav Kinsburskii wrote:
> > On Thu, Sep 28, 2023 at 06:22:54AM -0700, Dave Hansen wrote:
> >> On 9/27/23 09:13, Stanislav Kinsburskii wrote:
> >>> Once deposited, these pages can't be accessed by Linux anymore and thus
> >>> must be preserved in "used" state across kexec, as hypervisor state is
> >>> unaware of kexec.
> >>
> >> If Linux can't access them, they're not RAM any more.  I'd much rather
> >> remove them from the memory map and move on with life rather than
> >> implement a bunch of new ABI that's got to be handed across kernels.
> > 
> > Could you elaborate more on the new ABIs? FDT is handled by x86 already,
> > and passing it over kexec looks like a natural extension.
> > Also, adding more state to it doesn't look like a new ABI.
> > Or does it?
> 
> FDT makes it easier to pass arbitrary data around, but you're still
> creating a new "default_pmpool" device tree node on one end and
> consuming it on the other.  That's a new ABI in my book.
> 

Well, then yes, it's a new ABI.
I guess it could still be named "linux,cma", but then another compatible
string needs to be introduced, and that's again a new ABI, isn't it?
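
Just to make the shape of that ABI explicit, the producer side would be
something along these lines. This is a sketch with hypothetical node and
property names, not the actual patch; whatever names end up here are exactly
the new ABI being discussed.

#include <linux/types.h>
#include <linux/libfdt.h>

static int pmpool_fdt_publish(void *fdt, u64 base, u64 size)
{
        int node, err;

        /* "default_pmpool" and the properties below are made-up names. */
        node = fdt_add_subnode(fdt, 0 /* root */, "default_pmpool");
        if (node < 0)
                return node;

        err = fdt_setprop_string(fdt, node, "compatible", "linux,pmpool");
        if (!err)
                err = fdt_setprop_u64(fdt, node, "base", base);
        if (!err)
                err = fdt_setprop_u64(fdt, node, "size", size);

        return err;
}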

> > Let me also comment on removing these regions from the memory map. The
> > major peculiarity here is that the hypervisor distinguishes between the
> > pages deposited for guests to run and the pages deposited for the Linux
> > root partition to keep the guest-related portion of the hypervisor state
> > in the root partition. And the latter is the matter in question.
> >
> > We can indeed isolate and deposit an excessive amount of memory upfront
> > in the hope that the hypervisor will never get into a situation where it
> > needs more memory.
> > However, it's not reliable, as the amount of memory will always be an
> > estimation, depending on the number of expected guests, guest-attached
> > devices, etc. And this becomes an even bigger problem when most of the
> > memory is already removed from the memory map to host guest partitions.
> > It's also not efficient, as the amount of memory required by the hypervisor
> > can grow or shrink depending on the use case or host configuration, and
> > depositing an excessive amount of memory would be a waste.
> >
> > But, actually, the idea of removing the pages from the memory map was
> > reflected to some extent in the first version of this proposal,
> > so let me elaborate on it a bit.
> >
> > Effectively, instead of reserving and depositing a lot of memory to the
> > hypervisor upfront, the memory can be allocated from kernel memory when
> > needed and then returned when unused.
> > This would still require removing pages from the memory map upon kexec,
> > but that's another problem.
> 
> Let's distill this down a bit.
> 
> I agree that it's a waste to reserve an obscene amount of memory up
> front for all guests for rare cases.  Having the amount of consumed
> memory grow is a nice feature.
> 
> You can also quite easily *shrink* the amount of memory on a given
> kernel without new code.  Right?
> 
> The problem comes when you've grown the footprint of hypervisor-donated
> memory, kexec, and *THEN* want to shrink it.  That's what needs new
> metadata to be communicated over to the new kernel.
> 
> 1. Boot some kernel
> 2. Grow the deposited memory a bunch
> 3. Kexec
> 4. Shrink the deposited memory
> 
> Right?
> 

Well, not exactly. That's something I'd like to have indeed, but from my
POV this goal is out of scope of discussion at the moment.
Let me try to express it the same way you did above:

1. Boot some kernel
2. Grow the deposited memory a bunch
3. Kexec
4. Kernel panic due to a GPF upon accessing the memory deposited to the
hypervisor.

> That's where you lose me.
> 
> Can't the deposited memory just be shrunk before kexec?  Surely there
> aren't a bunch of pathological things consuming that memory right before
> kexec, which is basically a reboot.

In general it can. But for this to happen the hypervisor needs to release
this memory, and it can release the memory only if the guests are stopped.
Stopping the guests during kexec isn't something we want to have in the
long run.
Also, even if we stop the guests before kexec, we need to restart them
after boot, meaning we have to deposit the pages once again.
All of this (stopping the guests, withdrawing the pages upon kexec,
allocating after boot and depositing them again) significantly affects
guest downtime.

Thanks,
Stanislav



Re: [RFC PATCH v2 0/7] Introduce persistent memory pool

2023-09-28 Thread Dave Hansen
On 9/28/23 10:35, David Hildenbrand wrote:
> On 28.09.23 15:22, Dave Hansen wrote:
>> On 9/27/23 09:13, Stanislav Kinsburskii wrote:
>>> Once deposited, these pages can't be accessed by Linux anymore and thus
>>> must be preserved in "used" state across kexec, as hypervisor state is
>>> unaware of kexec.
>>
>> If Linux can't access them, they're not RAM any more.  I'd much rather
>> remove them from the memory map and move on with life rather than
>> implement a bunch of new ABI that's got to be handed across kernels.
> 
> The motivation of handling kexec (faster?) in a Hyper-V domain doesn't
> sound particularly compelling to me for such features. If you inflated
> memory, just don't allow kexec. It's been broken for years IIUC.

That's a good point.  What prevents deflating before kexec?



Re: [RFC PATCH v2 0/7] Introduce persistent memory pool

2023-09-28 Thread David Hildenbrand

On 28.09.23 15:22, Dave Hansen wrote:

On 9/27/23 09:13, Stanislav Kinsburskii wrote:

Once deposited, these pages can't be accessed by Linux anymore and thus
must be preserved in "used" state across kexec, as hypervisor state is
unaware of kexec.


If Linux can't access them, they're not RAM any more.  I'd much rather
remove them from the memory map and move on with life rather than
implement a bunch of new ABI that's got to be handed across kernels.


The motivation of handling kexec (faster?) in a Hyper-V domain doesn't
sound particularly compelling to me for such features. If you inflated
memory, just don't allow kexec. It's been broken for years IIUC.


Maybe the other use cases are more "relevant".

--
Cheers,

David / dhildenb




Re: [RFC PATCH v2 0/7] Introduce persistent memory pool

2023-09-28 Thread David Hildenbrand

On 28.09.23 12:25, Baoquan He wrote:

On 09/27/23 at 09:13am, Stanislav Kinsburskii wrote:

On Wed, Sep 27, 2023 at 01:44:38PM +0800, Baoquan He wrote:

Hi Stanislav,

On 09/25/23 at 02:27pm, Stanislav Kinsburskii wrote:

This patch introduces a memory allocator specifically tailored for
persistent memory within the kernel. The allocator maintains
kernel-specific states like DMA passthrough device states, IOMMU state, and
more across kexec.


Can you give more details about how this persistent memory pool will be
utilized in an actual scenario? I mean, what problem have you met that
you need a persistent memory pool to solve?



The major reason we have at the moment is that the Linux root partition
running on top of the Microsoft hypervisor needs to deposit pages to the
hypervisor at runtime, when the hypervisor runs out of memory.
"Depositing" here means that Linux passes a set of its PFNs to the
hypervisor via a hypercall, and the hypervisor then uses these pages for its
own needs.

Once deposited, these pages can't be accessed by Linux anymore and thus
must be preserved in a "used" state across kexec, as the hypervisor state is
unaware of kexec. At the same time, these pages can be withdrawn when
unused. Thus, an allocator persistent across kexec looks reasonable for
this particular matter.


Thanks for these details.
  
The deposit and withdraw remind me of the balloon driver, David's virtio-mem,
and DLPAR on ppc, which can hot-add or shrink physical memory on a guest
OS. Can't the Microsoft hypervisor do a similar thing to reclaim or give
back the memory from or to the 'Linux root partition' running on top of
the hypervisor?


virtio-mem was designed with kexec support in mind. You only expose the 
initial memory to the second kernel, and that memory can never have such 
holes. That does not apply to memory ballooning implementations, like 
Hyper-V dynamic memory.


In the virtio-mem paper I have the following:

"In our experiments, Hyper-V VMs crashed reliably when
trying to use kexec under Linux for fast OS reboots with
an inflated balloon. Other memory ballooning mechanisms
either have to temporarily deflate the whole balloon or al-
low access to inflated memory, which is undesired in cloud
environments."

I remember XEN does something elaborate, whereby they allow access to 
all inflated memory during reboot, but limit the total number of pages 
they will hand out. IIRC, you then have to work around things like 
"Windows initializes all memory with 0s when booting, and cope with 
that". So there are ways how hypervisors handled that in the past.


--
Cheers,

David / dhildenb




Re: [RFC PATCH v2 0/7] Introduce persistent memory pool

2023-09-28 Thread Dave Hansen
On 9/27/23 16:25, Stanislav Kinsburskii wrote:
> On Thu, Sep 28, 2023 at 06:22:54AM -0700, Dave Hansen wrote:
>> On 9/27/23 09:13, Stanislav Kinsburskii wrote:
>>> Once deposited, these pages can't be accessed by Linux anymore and thus
>>> must be preserved in "used" state across kexec, as hypervisor state is
>>> unaware of kexec.
>>
>> If Linux can't access them, they're not RAM any more.  I'd much rather
>> remove them from the memory map and move on with life rather than
>> implement a bunch of new ABI that's got to be handed across kernels.
> 
> Could you elaborate more on the new ABIs? FDT is handled by x86 already,
> and passing it over kexec looks like a natural extension.
> Also, adding more state to it doesn't look like a new ABI.
> Or does it?

FDT makes it easier to pass arbitrary data around, but you're still
creating a new "default_pmpool" device tree node on one end and
consuming it on the other.  That's a new ABI in my book.

> Let me also comment on removing these regions from the memory map. The
> major peculiarity here is that the hypervisor distinguishes between the
> pages deposited for guests to run and the pages deposited for the Linux
> root partition to keep the guest-related portion of the hypervisor state
> in the root partition. And the latter is the matter in question.
>
> We can indeed isolate and deposit an excessive amount of memory upfront
> in the hope that the hypervisor will never get into a situation where it
> needs more memory.
> However, it's not reliable, as the amount of memory will always be an
> estimation, depending on the number of expected guests, guest-attached
> devices, etc. And this becomes an even bigger problem when most of the
> memory is already removed from the memory map to host guest partitions.
> It's also not efficient, as the amount of memory required by the hypervisor
> can grow or shrink depending on the use case or host configuration, and
> depositing an excessive amount of memory would be a waste.
>
> But, actually, the idea of removing the pages from the memory map was
> reflected to some extent in the first version of this proposal,
> so let me elaborate on it a bit.
>
> Effectively, instead of reserving and depositing a lot of memory to the
> hypervisor upfront, the memory can be allocated from kernel memory when
> needed and then returned when unused.
> This would still require removing pages from the memory map upon kexec,
> but that's another problem.

Let's distill this down a bit.

I agree that it's a waste to reserve an obscene amount of memory up
front for all guests for rare cases.  Having the amount of consumed
memory grow is a nice feature.

You can also quite easily *shrink* the amount of memory on a given
kernel without new code.  Right?

The problem comes when you've grown the footprint of hypervisor-donated
memory, kexec, and *THEN* want to shrink it.  That's what needs new
metadata to be communicated over to the new kernel.

1. Boot some kernel
2. Grow the deposited memory a bunch
3. Kexec
4. Shrink the deposited memory

Right?

That's where you lose me.

Can't the deposited memory just be shrunk before kexec?  Surely there
aren't a bunch of pathological things consuming that memory right before
kexec, which is basically a reboot.



Re: [RFC PATCH v2 0/7] Introduce persistent memory pool

2023-09-28 Thread Stanislav Kinsburskii
On Thu, Sep 28, 2023 at 06:22:54AM -0700, Dave Hansen wrote:
> On 9/27/23 09:13, Stanislav Kinsburskii wrote:
> > Once deposited, these pages can't be accessed by Linux anymore and thus
> > must be preserved in "used" state across kexec, as hypervisor state is
> > unaware of kexec.
> 
> If Linux can't access them, they're not RAM any more.  I'd much rather
> remove them from the memory map and move on with life rather than
> implement a bunch of new ABI that's got to be handed across kernels.

Could you elaborate more on the new ABIs? FDT is handled by x86 already,
and passing it over kexec looks like a natural extension.
Also, adding more state to it doesn't look like a new ABI.
Or does it?

Let me also comment on removing these regions from the memory map. The
major peculiarity here is that the hypervisor distinguishes between the
pages deposited for guests to run and the pages deposited for the Linux
root partition to keep the guest-related portion of the hypervisor state
in the root partition. And the latter is the matter in question.

We can indeed isolate and deposit an excessive amount of memory upfront
in the hope that the hypervisor will never get into a situation where it
needs more memory.
However, it's not reliable, as the amount of memory will always be an
estimation, depending on the number of expected guests, guest-attached
devices, etc. And this becomes an even bigger problem when most of the
memory is already removed from the memory map to host guest partitions.
It's also not efficient, as the amount of memory required by the hypervisor
can grow or shrink depending on the use case or host configuration, and
depositing an excessive amount of memory would be a waste.

But, actually, the idea of removing the pages from the memory map was
reflected to some extent in the first version of this proposal,
so let me elaborate on it a bit.

Effectively, instead of reserving and depositing a lot of memory to the
hypervisor upfront, the memory can be allocated from kernel memory when
needed and then returned when unused.
This would still require removing pages from the memory map upon kexec,
but that's another problem.
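
As a rough sketch of what that on-demand flow could look like on top of CMA
(deposit_to_hypervisor() and withdraw_from_hypervisor() are placeholders for
the real hypercall wrappers, not existing functions):

#include <linux/cma.h>
#include <linux/mm.h>

/* Grow the hypervisor-donated memory on demand. */
static struct page *pmpool_grow(struct cma *cma, unsigned long nr_pages)
{
        struct page *pages;

        pages = cma_alloc(cma, nr_pages, 0, false);
        if (!pages)
                return NULL;

        if (deposit_to_hypervisor(page_to_pfn(pages), nr_pages)) {
                cma_release(cma, pages, nr_pages);
                return NULL;
        }

        return pages;
}

/* Shrink it again; only legal once the hypervisor has handed the range back. */
static void pmpool_shrink(struct cma *cma, struct page *pages,
                          unsigned long nr_pages)
{
        withdraw_from_hypervisor(page_to_pfn(pages), nr_pages);
        cma_release(cma, pages, nr_pages);
}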

Thanks,
Stanislav




Re: [RFC PATCH v2 0/7] Introduce persistent memory pool

2023-09-28 Thread Stanislav Kinsburskii
On Thu, Sep 28, 2023 at 06:25:44PM +0800, Baoquan He wrote:
> On 09/27/23 at 09:13am, Stanislav Kinsburskii wrote:
> > On Wed, Sep 27, 2023 at 01:44:38PM +0800, Baoquan He wrote:
> > > Hi Stanislav,
> > > 
> > > On 09/25/23 at 02:27pm, Stanislav Kinsburskii wrote:
> > > > This patch introduces a memory allocator specifically tailored for
> > > > persistent memory within the kernel. The allocator maintains
> > > > kernel-specific states like DMA passthrough device states, IOMMU state,
> > > > and more across kexec.
> > > 
> > > Can you give more details about how this persistent memory pool will be
> > > utilized in an actual scenario? I mean, what problem have you met that
> > > you need a persistent memory pool to solve?
> > > 
> > 
> > The major reason we have at the moment is that the Linux root partition
> > running on top of the Microsoft hypervisor needs to deposit pages to the
> > hypervisor at runtime, when the hypervisor runs out of memory.
> > "Depositing" here means that Linux passes a set of its PFNs to the
> > hypervisor via a hypercall, and the hypervisor then uses these pages for its
> > own needs.
> >
> > Once deposited, these pages can't be accessed by Linux anymore and thus
> > must be preserved in a "used" state across kexec, as the hypervisor state is
> > unaware of kexec. At the same time, these pages can be withdrawn when
> > unused. Thus, an allocator persistent across kexec looks reasonable for
> > this particular matter.
> 
> Thanks for these details.
>  
> The deposit and withdraw remind me of the balloon driver, David's virtio-mem,
> and DLPAR on ppc, which can hot-add or shrink physical memory on a guest
> OS. Can't the Microsoft hypervisor do a similar thing to reclaim or give
> back the memory from or to the 'Linux root partition' running on top of
> the hypervisor?
> 

Although the Microsoft hypervisor is a type 1 hypervisor and runs on the
physical hardware, like Xen, it doesn't control all the memory, but is
rather granted memory by either the boot loader or the Linux root
partition (the similarly privileged VM is called "Dom0" in the Xen world).
IOW, this works in the opposite direction: Linux gives memory to the
hypervisor, and can reclaim it back. However, doing so on kexec increases
downtime, as the withdrawn pages must be deposited back again after booting
to restore the guests ("DomU" in Xen terminology).

It is worth mentioning that the "deposited pages" in this context don't
mean guest pages, but the pages required by the hypervisor to store the
Linux root partition state used to control guest partitions.

Also, reclaiming pages is not possible if guests are left running during
kexec, as the hypervisor requires the Linux root partition-related
state to stay intact to keep the guest state consistent.

> Thanks
> Baoquan



Re: [RFC PATCH v2 0/7] Introduce persistent memory pool

2023-09-28 Thread Dave Hansen
On 9/27/23 09:13, Stanislav Kinsburskii wrote:
> Once deposited, these pages can't be accessed by Linux anymore and thus
> must be preserved in "used" state across kexec, as hypervisor state is
> unaware of kexec.

If Linux can't access them, they're not RAM any more.  I'd much rather
remove them from the memory map and move on with life rather than
implement a bunch of new ABI that's got to be handed across kernels.



Re: [RFC PATCH v2 0/7] Introduce persistent memory pool

2023-09-28 Thread Baoquan He
On 09/27/23 at 09:13am, Stanislav Kinsburskii wrote:
> On Wed, Sep 27, 2023 at 01:44:38PM +0800, Baoquan He wrote:
> > Hi Stanislav,
> > 
> > On 09/25/23 at 02:27pm, Stanislav Kinsburskii wrote:
> > > This patch introduces a memory allocator specifically tailored for
> > > persistent memory within the kernel. The allocator maintains
> > > kernel-specific states like DMA passthrough device states, IOMMU state,
> > > and more across kexec.
> > 
> > Can you give more details about how this persistent memory pool will be
> > utilized in an actual scenario? I mean, what problem have you met that
> > you need a persistent memory pool to solve?
> > 
> 
> The major reason we have at the moment is that the Linux root partition
> running on top of the Microsoft hypervisor needs to deposit pages to the
> hypervisor at runtime, when the hypervisor runs out of memory.
> "Depositing" here means that Linux passes a set of its PFNs to the
> hypervisor via a hypercall, and the hypervisor then uses these pages for its
> own needs.
>
> Once deposited, these pages can't be accessed by Linux anymore and thus
> must be preserved in a "used" state across kexec, as the hypervisor state is
> unaware of kexec. At the same time, these pages can be withdrawn when
> unused. Thus, an allocator persistent across kexec looks reasonable for
> this particular matter.

Thanks for these details.
 
The deposit and withdraw remind me of the balloon driver, David's virtio-mem,
and DLPAR on ppc, which can hot-add or shrink physical memory on a guest
OS. Can't the Microsoft hypervisor do a similar thing to reclaim or give
back the memory from or to the 'Linux root partition' running on top of
the hypervisor?

Thanks
Baoquan




Re: [RFC PATCH v2 0/7] Introduce persistent memory pool

2023-09-27 Thread Stanislav Kinsburskii
On Wed, Sep 27, 2023 at 01:44:38PM +0800, Baoquan He wrote:
> Hi Stanislav,
> 
> On 09/25/23 at 02:27pm, Stanislav Kinsburskii wrote:
> > This patch introduces a memory allocator specifically tailored for
> > persistent memory within the kernel. The allocator maintains
> > kernel-specific states like DMA passthrough device states, IOMMU state, and
> > more across kexec.
> 
> Can you give more details about how this persistent memory pool will be
> utilized in an actual scenario? I mean, what problem have you met that
> you need a persistent memory pool to solve?
> 

The major reason we have at the moment is that the Linux root partition
running on top of the Microsoft hypervisor needs to deposit pages to the
hypervisor at runtime, when the hypervisor runs out of memory.
"Depositing" here means that Linux passes a set of its PFNs to the
hypervisor via a hypercall, and the hypervisor then uses these pages for its
own needs.

Once deposited, these pages can't be accessed by Linux anymore and thus
must be preserved in a "used" state across kexec, as the hypervisor state is
unaware of kexec. At the same time, these pages can be withdrawn when
unused. Thus, an allocator persistent across kexec looks reasonable for
this particular matter.
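
For readers not familiar with the flow, a deposit boils down to roughly the
following. This is only a sketch: hv_deposit_pfns() is a hypothetical
stand-in for the actual hypercall wrapper, not an existing function.

#include <linux/gfp.h>
#include <linux/mm.h>
#include <linux/slab.h>

static int deposit_pages(unsigned int order)
{
        unsigned int i, nr = 1U << order;
        struct page *page;
        u64 *pfns;
        int ret;

        page = alloc_pages(GFP_KERNEL, order);
        if (!page)
                return -ENOMEM;

        pfns = kcalloc(nr, sizeof(*pfns), GFP_KERNEL);
        if (!pfns) {
                __free_pages(page, order);
                return -ENOMEM;
        }

        for (i = 0; i < nr; i++)
                pfns[i] = page_to_pfn(page) + i;

        /* hv_deposit_pfns() stands in for the real hypercall wrapper. */
        ret = hv_deposit_pfns(pfns, nr);
        if (ret)
                __free_pages(page, order);
        /* On success the pages now belong to the hypervisor and must not be
         * freed or touched until they are withdrawn - which is the state that
         * has to survive kexec. */

        kfree(pfns);
        return ret;
}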

Also, the last patch in the series is aimed at demonstrating the usage
described above.

Thanks,
Stanislav

> Thanks
> Baoquan
> 
> > 
> > The current implementation provides a foundation for custom solutions that
> > may be developed in the future. Although the design is kept concise and
> > straightforward to encourage discussion and feedback, it remains fully
> > functional.
> > 
> > The persistent memory pool builds upon the contiguous memory allocator
> > (CMA) and ensures CMA state persistency across kexec by incorporating the
> > CMA bitmap into the memory region instead of allocating it from kernel
> > memory.
> >
> > Persistent memory pool metadata is passed across kexec using a Flattened
> > Device Tree, which is added as another kexec segment on the x86 architecture.
> > 
> > Potential applications include:
> > 
> >   1. Enabling various in-kernel entities to allocate persistent pages from
> >  a unified memory pool, obviating the need for reserving multiple
> >  regions.
> > 
> >   2. For in-kernel components that need the allocation address to be
> >  retained on kernel kexec, this address can be exposed to user space
> >  and subsequently passed through the command line.
> > 
> >   3. Distinct subsystems or drivers can set aside their region, allocating
> >  a segment for their persistent memory pool, suitable for uses such as
> >  file systems, key-value stores, and other applications.
> > 
> > Notes:
> > 
> >   1. The last patch of the series represents a use case for the feature.
> >  However, the patch won't compile and is for illustrative purposes only
> >  as the code being patched hasn't been merged yet.
> > 
> >   2. The code being patched is currently under review by the community. The
> >  series is named "Introduce /dev/mshv drivers":
> > 
> >  https://lkml.org/lkml/2023/9/22/1117
> > 
> > 
> > Changes since v1:
> > 
> >   1. Persistent memory pool is now a wrapper on top of CMA instead of being
> >  a new allocator.
> >
> >   2. Persistent memory pool metadata doesn't belong to the pool anymore and
> >  is now passed over kexec to the new kernel via a Flattened Device Tree.
> > 
> > The following series implements...
> > 
> > ---
> > 
> > Stanislav Kinsburskii (7):
> >   kexec_file: Add fdt modification callback support
> >   x86: kexec: Transfer existing fdt to the new kernel
> >   x86: kexec: Enable fdt modification in callbacks
> >   pmpool: Introduce persistent memory pool
> >   pmpool: Update device tree on kexec
> >   pmpool: Restore state from device tree post-kexec
> >   Drivers: hv: Allocate persistent pages for root partition
> > 
> > 
> >  arch/x86/Kconfig  |   16 +++
> >  arch/x86/kernel/kexec-bzimage64.c |   97 +
> >  drivers/hv/hv_common.c|   13 ++
> >  include/linux/kexec.h |7 +
> >  include/linux/pmpool.h|   22 
> >  kernel/kexec_file.c   |   24 
> >  mm/Kconfig|9 ++
> >  mm/Makefile   |1 
> >  mm/pmpool.c   |  208 +
> >  9 files changed, 394 insertions(+), 3 deletions(-)
> >  create mode 100644 include/linux/pmpool.h
> >  create mode 100644 mm/pmpool.c
> > 
> > 



Re: [RFC PATCH v2 0/7] Introduce persistent memory pool

2023-09-27 Thread Baoquan He
Hi Stanislav,

On 09/25/23 at 02:27pm, Stanislav Kinsburskii wrote:
> This patch introduces a memory allocator specifically tailored for
> persistent memory within the kernel. The allocator maintains
> kernel-specific states like DMA passthrough device states, IOMMU state, and
> more across kexec.

Can you give more details about how this persistent memory pool will be
utilized in an actual scenario? I mean, what problem have you met that
you need a persistent memory pool to solve?

Thanks
Baoquan

> 
> The current implementation provides a foundation for custom solutions that
> may be developed in the future. Although the design is kept concise and
> straightforward to encourage discussion and feedback, it remains fully
> functional.
> 
> The persistent memory pool builds upon the contiguous memory allocator
> (CMA) and ensures CMA state persistency across kexec by incorporating the
> CMA bitmap into the memory region instead of allocating it from kernel
> memory.
>
> Persistent memory pool metadata is passed across kexec using a Flattened
> Device Tree, which is added as another kexec segment on the x86 architecture.
> 
> Potential applications include:
> 
>   1. Enabling various in-kernel entities to allocate persistent pages from
>  a unified memory pool, obviating the need for reserving multiple
>  regions.
> 
>   2. For in-kernel components that need the allocation address to be
>  retained on kernel kexec, this address can be exposed to user space
>  and subsequently passed through the command line.
> 
>   3. Distinct subsystems or drivers can set aside their region, allocating
>  a segment for their persistent memory pool, suitable for uses such as
>  file systems, key-value stores, and other applications.
> 
> Notes:
> 
>   1. The last patch of the series represents a use case for the feature.
>  However, the patch won't compile and is for illustrative purposes only
>  as the code being patched hasn't been merged yet.
> 
>   2. The code being patched is currently under review by the community. The
>  series is named "Introduce /dev/mshv drivers":
> 
>  https://lkml.org/lkml/2023/9/22/1117
> 
> 
> Changes since v1:
> 
>   1. Persistent memory pool is now a wrapper on top of CMA instead of being a
>  new allocator.
> 
>   2. Persistent memory pool metadata doesn't belong to the pool anymore and
>  is now passed over kexec to the new kernel via a Flattened Device Tree.
> 
> The following series implements...
> 
> ---
> 
> Stanislav Kinsburskii (7):
>   kexec_file: Add fdt modification callback support
>   x86: kexec: Transfer existing fdt to the new kernel
>   x86: kexec: Enable fdt modification in callbacks
>   pmpool: Introduce persistent memory pool
>   pmpool: Update device tree on kexec
>   pmpool: Restore state from device tree post-kexec
>   Drivers: hv: Allocate persistent pages for root partition
> 
> 
>  arch/x86/Kconfig  |   16 +++
>  arch/x86/kernel/kexec-bzimage64.c |   97 +
>  drivers/hv/hv_common.c|   13 ++
>  include/linux/kexec.h |7 +
>  include/linux/pmpool.h|   22 
>  kernel/kexec_file.c   |   24 
>  mm/Kconfig|9 ++
>  mm/Makefile   |1 
>  mm/pmpool.c   |  208 +
>  9 files changed, 394 insertions(+), 3 deletions(-)
>  create mode 100644 include/linux/pmpool.h
>  create mode 100644 mm/pmpool.c
> 
> 

