On Mon, Dec 13, 2021 at 12:06:35AM +, Matthew Wilcox (Oracle) wrote:
> + /* trim iter to not go beyond EOF */
> + if (iter->count > vmcore_size - *fpos)
> + iter->count = vmcore_size - *fpos;
Nit: iov_iter_truncate()
Otherwise this looks good from a cursory view.
>
> ssize_t elfcorehdr_read(char *buf, size_t count, u64 *ppos)
> {
> - return read_from_oldmem(buf, count, ppos, 0,
> + struct kvec kvec = { .iov_base = buf, .iov_len = count };
> + struct iov_iter iter;
> +
> + iov_iter_kvec(&iter, READ, &kvec, 1, count);
> +
> + return re
On 12/13/21 at 08:44am, Christoph Hellwig wrote:
> On Tue, Dec 07, 2021 at 11:07:47AM +0800, Baoquan He wrote:
> > In the current code, three atomic memory pools are always created,
> > atomic_pool_kernel|dma|dma32, even though 'coherent_pool=0' is
> > specified in kernel command line. In fact, ato
From: Matthew Wilcox
> Sent: 12 December 2021 11:48
>
> On Sat, Dec 11, 2021 at 05:53:46PM +, David Laight wrote:
> > From: Tiezhu Yang
> > > Sent: 11 December 2021 03:33
> > >
> > > v2:
> > > -- add copy_to_user_or_kernel() in lib/usercopy.c
> > > -- define userbuf as bool type
> >
> > In
On 12/13/21 at 09:02am, Christoph Hellwig wrote:
> >
> > ssize_t elfcorehdr_read(char *buf, size_t count, u64 *ppos)
> > {
> > - return read_from_oldmem(buf, count, ppos, 0,
> > + struct kvec kvec = { .iov_base = buf, .iov_len = count };
> > + struct iov_iter iter;
> > +
> > + iov_iter_
On Fri, Dec 10, 2021 at 01:53:59PM -0600, john.p.donne...@oracle.com wrote:
> On 12/8/21 11:13 AM, Catalin Marinas wrote:
> > On Tue, Nov 23, 2021 at 08:46:35PM +0800, Zhen Lei wrote:
> > > Chen Zhou (10):
> > >x86: kdump: replace the hard-coded alignment with macro CRASH_ALIGN
> > >x86: kd
Hi Alexander,
@Alexander: Thanks for taking care of this.
On Wed, 8 Dec 2021 13:53:55 +0100
Alexander Egorenkov wrote:
> Starting with gcc 11.3, the C compiler will generate PLT-relative function
> calls even if they are local and do not require it. Later on during linking,
> the linker will r
Background information can be found in the cover letter of the v2 RESEND post
below:
https://lore.kernel.org/all/20211207030750.30824-1-...@redhat.com/T/#u
Changelog:
v2-Resend -> v3:
- Re-implement has_managed_dma() according to David's suggestion.
- Add Fixes tag and cc stable.
v2->v2 RESEND:
-
In several places the current kernel assumes that the DMA zone must have
managed pages if CONFIG_ZONE_DMA is enabled. However, this is not always true.
E.g. in the kdump kernel of x86_64, only the low 1M is presented and locked down
at a very early stage of boot, so that there are no managed pages at all in the
DMA zo
Dma-kmalloc will be created as long as CONFIG_ZONE_DMA is enabled.
However, it will fail if DMA zone has no managed pages. The failure
can be seen in kdump kernel of x86_64 as below:
CPU: 0 PID: 65 Comm: kworker/u2:1 Not tainted 5.14.0-rc2+ #9
Hardware name: Intel Corporation SandyBridge Platfor
Since commit 1d659236fb43 ("dma-pool: scale the default DMA coherent pool
size with memory capacity"), the default size of the atomic pool has been
changed to scale with system memory capacity. So update the
documentation in kernel-parameters.txt accordingly.
Signed-off-by: Baoquan He
---
Documen
In the current code, three atomic memory pools are always created,
atomic_pool_kernel|dma|dma32, even though 'coherent_pool=0' is
specified in kernel command line. In fact, atomic pool is only
necessary when CONFIG_DMA_DIRECT_REMAP=y or mem_encrypt_active=y
which are needed on only a few ARCHes.
So chang
Currently three dma atomic pools are initialized as long as the relevant
kernel code is built in. But in the kdump kernel of x86_64 this is not
right when trying to create atomic_pool_dma, because there are no managed
pages in the DMA zone. In this case, the DMA zone only has the low 1M memory presented
and lock
On 12/10/21 at 02:55pm, Zhen Lei wrote:
> From: Chen Zhou
>
> Move CRASH_ALIGN to header asm/kexec.h for later use.
>
> Suggested-by: Dave Young
> Suggested-by: Baoquan He
I remember Dave and I discussed and suggested this when reviewing.
You can remove my Suggested-by.
For this one, I would
On Tue, Dec 07, 2021 at 11:16:31AM +0800, Baoquan He wrote:
> > This low 1M lock down is needed because AMD SME encrypts memory making
> > the old backup region mechanism impossible when switching into kdump
> > kernel. And Intel engineer mentioned their TDX (Trusted domain extensions)
> > which is
On 12/10/21 at 02:55pm, Zhen Lei wrote:
> From: Chen Zhou
>
> The lower bounds of crash kernel reservation and crash kernel low
> reservation are different, use the consistent value CRASH_ALIGN.
>
> Suggested-by: Dave Young
> Signed-off-by: Chen Zhou
> Signed-off-by: Zhen Lei
You may need ad
On Mon, Dec 13, 2021 at 09:02:57AM +0100, Christoph Hellwig wrote:
> >
> > ssize_t elfcorehdr_read(char *buf, size_t count, u64 *ppos)
> > {
> > - return read_from_oldmem(buf, count, ppos, 0,
> > + struct kvec kvec = { .iov_base = buf, .iov_len = count };
> > + struct iov_iter iter;
> > +
Hello Baoquan. I have a question on your code.
On Mon, Dec 13, 2021 at 08:27:12PM +0800, Baoquan He wrote:
> Dma-kmalloc will be created as long as CONFIG_ZONE_DMA is enabled.
> However, it will fail if DMA zone has no managed pages. The failure
> can be seen in kdump kernel of x86_64 as below:
>
On 12/13/21 at 02:25pm, Borislav Petkov wrote:
> On Tue, Dec 07, 2021 at 11:16:31AM +0800, Baoquan He wrote:
> > > This low 1M lock down is needed because AMD SME encrypts memory making
> > > the old backup region mechanism impossible when switching into kdump
> > > kernel. And Intel engineer menti
---
fs/9p/vfs_dir.c | 5 +
fs/9p/xattr.c | 6 ++
include/linux/uio.h | 9 +
lib/iov_iter.c | 32
4 files changed, 44 insertions(+), 8 deletions(-)
diff --git a/fs/9p/vfs_dir.c b/fs/9p/vfs_dir.c
index 8c854d8cb0cd..cad6c24f9f0d 100
Remove the read_from_oldmem() wrapper introduced earlier and convert
all the remaining callers to pass an iov_iter.
Signed-off-by: Matthew Wilcox (Oracle)
---
arch/x86/kernel/crash_dump_64.c | 7 +-
fs/proc/vmcore.c| 40 +
include/linux/crash_
For some reason several people have been sending bad patches to fix
compiler warnings in vmcore recently. Here's how it should be done.
Compile-tested only on x86. As noted in the first patch, s390 should
take this conversion a bit further, but I'm not inclined to do that
work myself.
v2:
- Rem
This gets rid of copy_to() and lets us use proc_read_iter() instead
of proc_read().
Signed-off-by: Matthew Wilcox (Oracle)
---
fs/proc/vmcore.c | 81 +---
1 file changed, 29 insertions(+), 52 deletions(-)
diff --git a/fs/proc/vmcore.c b/fs/proc/vmcore.
On 12/13/21 6:27 AM, Baoquan He wrote:
In the current code, three atomic memory pools are always created,
atomic_pool_kernel|dma|dma32, even though 'coherent_pool=0' is
specified in kernel command line. In fact, atomic pool is only
necessary when CONFIG_DMA_DIRECT_REMAP=y or mem_encrypt_active=y
On Thu, Dec 09, 2021 at 01:59:58PM +0100, Christoph Lameter wrote:
> On Thu, 9 Dec 2021, Baoquan He wrote:
>
> > > The slab allocators guarantee that all kmalloc allocations are DMA able
> > > indepent of specifying ZONE_DMA/ZONE_DMA32
> >
> > Here you mean we guarantee dma-kmalloc will be DMA abl
On 12/13/21 6:27 AM, Baoquan He wrote:
Since commit 1d659236fb43 ("dma-pool: scale the default DMA coherent pool
size with memory capacity"), the default size of the atomic pool has been
changed to scale with system memory capacity. So update the
documentation in kernel-parameters.txt accordingly.
On 12/13/21 6:27 AM, Baoquan He wrote:
In several places the current kernel assumes that the DMA zone must have
managed pages if CONFIG_ZONE_DMA is enabled. However, this is not always true.
E.g. in the kdump kernel of x86_64, only the low 1M is presented and locked down
at a very early stage of boot, so that
On 12/13/21 6:27 AM, Baoquan He wrote:
Currently three dma atomic pools are initialized as long as the relevant
kernel code is built in. But in the kdump kernel of x86_64 this is not
right when trying to create atomic_pool_dma, because there are no managed
pages in the DMA zone. In this case, the DMA zone
On 12/13/21 6:27 AM, Baoquan He wrote:
Dma-kmalloc will be created as long as CONFIG_ZONE_DMA is enabled.
However, it will fail if DMA zone has no managed pages. The failure
can be seen in kdump kernel of x86_64 as below:
CPU: 0 PID: 65 Comm: kworker/u2:1 Not tainted 5.14.0-rc2+ #9
Hardware
On 12/10/21 12:55 AM, Zhen Lei wrote:
From: Chen Zhou
Move CRASH_ALIGN to header asm/kexec.h for later use.
Suggested-by: Dave Young
Suggested-by: Baoquan He
Signed-off-by: Chen Zhou
Signed-off-by: Zhen Lei
Tested-by: John Donnelly
Tested-by: Dave Kleikamp
>
Acked-by: John Donnelly
On 12/10/21 12:55 AM, Zhen Lei wrote:
From: Chen Zhou
To make the function reserve_crashkernel() generic,
replace some hard-coded numbers with macro CRASH_ADDR_LOW_MAX.
Signed-off-by: Chen Zhou
Signed-off-by: Zhen Lei
Tested-by: John Donnelly
Tested-by: Dave Kleikamp
Acked-by: Baoquan
On 12/10/21 12:55 AM, Zhen Lei wrote:
From: Chen Zhou
The lower bounds of crash kernel reservation and crash kernel low
reservation are different, use the consistent value CRASH_ALIGN.
Suggested-by: Dave Young
Signed-off-by: Chen Zhou
Signed-off-by: Zhen Lei
Tested-by: John Donnelly
Tested
On 12/10/21 12:55 AM, Zhen Lei wrote:
From: Chen Zhou
Make the functions reserve_crashkernel[_low]() generic. Since the
reserve_crashkernel[_low]() implementations are quite similar on other
architectures as well, we can have more users of this later.
So have CONFIG_ARCH_WANT_RESERVE_CRASH_KERN
On 12/10/21 12:55 AM, Zhen Lei wrote:
From: Chen Zhou
We will make the function reserve_crashkernel() generic; the
xen_pv_domain() check in reserve_crashkernel() is relevant only to
x86, the same as insert_resource() in reserve_crashkernel[_low]().
So move xen_pv_domain() check and insert_r
On 12/10/21 12:55 AM, Zhen Lei wrote:
From: Chen Zhou
There are the following issues in arm64 kdump:
1. We use crashkernel=X to reserve the crashkernel below 4G, which
will fail when there is not enough low memory.
2. If reserving the crashkernel above 4G, the crash dump
kernel will fail to boot b
On 12/10/21 12:55 AM, Zhen Lei wrote:
From: Chen Zhou
Introduce macro CRASH_ALIGN for alignment, macro CRASH_ADDR_LOW_MAX
for upper bound of low crash memory, macro CRASH_ADDR_HIGH_MAX for
upper bound of high crash memory, use macros instead.
Besides, keep consistent with x86, use CRASH_ALIGN
On Mon, Dec 13, 2021 at 02:19:07PM +, Matthew Wilcox (Oracle) wrote:
> ---
> fs/9p/vfs_dir.c | 5 +
> fs/9p/xattr.c | 6 ++
> include/linux/uio.h | 9 +
> lib/iov_iter.c | 32
> 4 files changed, 44 insertions(+), 8 deletions(-)
On 12/10/21 12:55 AM, Zhen Lei wrote:
From: Chen Zhou
When reserving crashkernel in high memory, some low memory is reserved
for crash dump kernel devices and never mapped by the first kernel.
This memory range is advertised to crash dump kernel via DT property
under /chosen,
linux,usa
On 12/10/21 12:55 AM, Zhen Lei wrote:
Currently, we parse the "linux,usable-memory-range" property in
early_init_dt_scan_chosen(), to obtain the specified memory range of the
crash kernel. We then reserve the required memory after
early_init_dt_scan_memory() has identified all available physical
On 12/10/21 12:55 AM, Zhen Lei wrote:
From: Chen Zhou
For arm64, the behavior of crashkernel=X has been changed: it
tries a low allocation in the DMA zone and falls back to a high allocation
if that fails.
We can also use "crashkernel=X,high" to select a high region above
DMA zone, which also tries to
This gets rid of copy_to() and lets us use proc_read_iter() instead
of proc_read().
Signed-off-by: Matthew Wilcox (Oracle)
---
fs/proc/vmcore.c | 81 +---
1 file changed, 29 insertions(+), 52 deletions(-)
diff --git a/fs/proc/vmcore.c b/fs/proc/vmcore.
Instead of passing in a 'buf' and 'userbuf' argument, pass in an iov_iter.
s390 needs more work to pass the iov_iter down further, or refactor,
but I'd be more comfortable if someone who can test on s390 did that work.
It's more convenient to convert the whole of read_from_oldmem() to
take an iov_
For some reason several people have been sending bad patches to fix
compiler warnings in vmcore recently. Here's how it should be done.
Compile-tested only on x86. As noted in the first patch, s390 should
take this conversion a bit further, but I'm not inclined to do that
work myself.
v3:
- Sen
Remove the read_from_oldmem() wrapper introduced earlier and convert
all the remaining callers to pass an iov_iter.
Signed-off-by: Matthew Wilcox (Oracle)
---
arch/x86/kernel/crash_dump_64.c | 7 +-
fs/proc/vmcore.c| 40 +
include/linux/crash_
On 12/10/21 12:55 AM, Zhen Lei wrote:
There are the following issues in arm64 kdump:
1. We use crashkernel=X to reserve the crashkernel below 4G, which
will fail when there is not enough low memory.
2. If reserving the crashkernel above 4G, the crash dump
kernel will fail to boot because there is no
On Mon, Dec 13, 2021 at 08:30:33AM +, David Laight wrote:
> From: Matthew Wilcox
> > Sent: 12 December 2021 11:48
> >
> > On Sat, Dec 11, 2021 at 05:53:46PM +, David Laight wrote:
> > > From: Tiezhu Yang
> > > > Sent: 11 December 2021 03:33
> > > >
> > > > v2:
> > > > -- add copy_to_user
Hello,
On Tue, Dec 07, 2021 at 05:10:34PM +0100, Philipp Rudo wrote:
> Hi Michal,
>
> On Thu, 25 Nov 2021 19:02:44 +0100
> Michal Suchanek wrote:
>
> > Multiple users of mod_check_sig check for the marker, then call
> > mod_check_sig, extract signature length, and remove the signature.
> >
> >
Hello,
On Sun, Dec 12, 2021 at 07:46:53PM -0500, Nayna wrote:
>
> On 11/25/21 13:02, Michal Suchanek wrote:
> > Copy the code from s390x
> >
> > Signed-off-by: Michal Suchanek
> > ---
> > arch/powerpc/Kconfig| 11 +++
> > arch/powerpc/kexec/elf_64.c | 36 +
On Fri, Dec 10, 2021 at 03:15:00PM +0800, Kefeng Wang wrote:
>
> On 2021/12/10 14:55, Zhen Lei wrote:
> > There are following issues in arm64 kdump:
> > 1. We use crashkernel=X to reserve crashkernel below 4G, which
> > will fail when there is not enough low memory.
> > 2. If reserving crashkernel
On Mon, Dec 13, 2021 at 08:37:48AM -0600, john.p.donne...@oracle.com wrote:
> On 12/10/21 12:55 AM, Zhen Lei wrote:
> > There are following issues in arm64 kdump:
> > 1. We use crashkernel=X to reserve crashkernel below 4G, which
> > will fail when there is not enough low memory.
> > 2. If reserving
On Mon, Dec 13, 2021 at 08:37:48AM -0600, john.p.donne...@oracle.com wrote:
> After 2 years and 17 versions, can we now get this series promoted into a
> build?
For example:
$ ./scripts/get_maintainer.pl -f Documentation/admin-guide/kdump/kdump.rst
Baoquan He (maintainer:KDUMP)
Vivek Goyal (r
> Subject: Re: [PATCH v17 01/10] x86: kdump: replace the hard-coded alignment
> with macro CRASH_ALIGN
From Documentation/process/maintainer-tip.rst:
"Patch subject
The tip tree preferred format for patch subject prefixes is
'subsys/component:', e.g. 'x86/apic:', 'x86/mm/fault:'"
On Mon, 13 Dec 2021 20:27:07 +0800 Baoquan He wrote:
> Background information can be checked in cover letter of v2 RESEND POST
> as below:
> https://lore.kernel.org/all/20211207030750.30824-1-...@redhat.com/T/#u
Please include all relevant info right here, in the [0/n]. For a
number of reasons,
On 12/13/21 at 01:05pm, Andrew Morton wrote:
> On Mon, 13 Dec 2021 20:27:07 +0800 Baoquan He wrote:
>
> > Background information can be checked in cover letter of v2 RESEND POST
> > as below:
> > https://lore.kernel.org/all/20211207030750.30824-1-...@redhat.com/T/#u
>
> Please include all releva
On 12/13/21 at 01:43pm, Hyeonggon Yoo wrote:
> Hello Baoquan. I have a question on your code.
>
> On Mon, Dec 13, 2021 at 08:27:12PM +0800, Baoquan He wrote:
> > Dma-kmalloc will be created as long as CONFIG_ZONE_DMA is enabled.
> > However, it will fail if DMA zone has no managed pages. The failu