Hi Tyrel,
> The pci_bus->bridge reference may no longer be valid after
> pci_bus_remove() resulting in passing a bad value to device_unregister()
> for the associated bridge device.
>
> Store the host_bridge reference in a separate variable prior to
> pci_bus_remove().
>
The patch certainly seems
There are no obvious macros in that file that might construct the name
of the function in a way that is hidden from grep.
All in all, I am fairly confident that the function is indeed not used.
Reviewed-by: Daniel Axtens
Kind regards,
Daniel
> -
> static inline struct qe_ic *qe_ic_from_irq_data(struct irq_data *d)
> {
> return irq_data_get_irq_chip_data(d);
> --
> 1.8.3.1
Hi,
Thanks for your contribution to the kernel!
I notice that your patch is submitted as an attachment. In future,
please could you submit your patch inline, rather than as an attachment?
See https://www.kernel.org/doc/html/v4.15/process/5.Posting.html
I'd recommend you use git send-email if possible.
Hi Christophe,
> With hugepd, page table entries can be at any level
> and can be of any size.
>
> Add support for them.
>
> Signed-off-by: Christophe Leroy
> ---
> mm/ptdump.c | 17 +++--
> 1 file changed, 15 insertions(+), 2 deletions(-)
>
> diff --git a/mm/ptdump.c b/mm/ptdump.c
Hi Christophe,
> static void note_page(struct ptdump_state *pt_st, unsigned long addr, int
> level,
> - u64 val)
> + u64 val, unsigned long page_size)
Compilers can warn about unused parameters at -Wextra level. However,
reading scripts/Makefile.extrawarn it
Hi Christophe,
> Pagewalk ignores hugepd entries and walks down the tables
> as if they were traditional entries, leading to crazy results.
>
> Add walk_hugepd_range() and use it to walk hugepage tables.
>
> Signed-off-by: Christophe Leroy
> ---
> mm/pagewalk.c | 54 +
Daniel Axtens writes:
It looks like the kernel test robot also reported this:
"[PATCH] powerpc/iommu/debug: fix ifnullfree.cocci warnings"
Weirdly I don't see it in patchwork.
I'm not sure which one mpe will want to take but either would do.
>> Fix the following cocci
> comment format, i.e. '/*', to prevent kernel-doc from parsing it.
This makes sense.
Reviewed-by: Daniel Axtens
Kind regards,
Daniel
>
> Signed-off-by: Aditya Srivastava
> ---
> * Applies perfectly on next-20210319
>
> drivers/crypto/vmx/aes.c | 2 +-
>
If you have a number of similar fixes it might be helpful to do them in
a single bigger patch, but I'm not sure if coccicheck reports much else
as I don't have coccinelle installed at the moment.
Reviewed-by: Daniel Axtens
Kind regards,
Daniel
at line 19, so remove the
> duplicate one at line 23.
For this one I checked the file. Indeed there is another inttypes.h, so
this is also correct.
Again, all the automated checks pass. (although I don't think any of the
automated builds include selftests.)
So:
Reviewed-by: Daniel Axtens
ate between hugetlb and THP during page
walk"). How odd!
Anyway, all of these look good to me, and the automated checks at
http://patchwork.ozlabs.org/project/linuxppc-dev/patch/20210323062916.295346-1-wanjiab...@vivo.com/
have all passed.
Reviewed-by: Daniel Axtens
Kind regards,
Daniel
Out of interest, what tool are you using
to find these unused inlines? If there are many more, it might make
sense to combine future patches removing them into a single patch, but
I'm not sure.
checkpatch likes this patch, so that's also good :)
Reviewed-by: Daniel Axtens
Kind regards,
Daniel
rework the secondary inhibit
code")
I don't think this warrants another revision, I think leaving the commit
name on one line makes sense.
Reviewed-by: Daniel Axtens
Kind regards,
Daniel
> Signed-off-by: YueHaibing
> ---
> arch/powerpc/include/asm/smp.h | 2 --
> 1 f
e file, so there's nothing else we'd
really want to clean up while you're doing cleanups.
Reviewed-by: Daniel Axtens
Kind regards,
Daniel
>
> Signed-off-by: Randy Dunlap
> Cc: Michael Ellerman
> Cc: linuxppc-...@lists.ozlabs.org
> ---
> tools/testing/self
Daniel Axtens writes:
> Hi Christophe,
>
>> Commit 6bfd93c32a50 ("powerpc: Fix incorrect might_sleep in
>> __get_user/__put_user on kernel addresses") added a check to not call
>> might_sleep() on kernel addresses. This was to enable the use of
>> __get_u
Hi Christophe,
> Commit 6bfd93c32a50 ("powerpc: Fix incorrect might_sleep in
> __get_user/__put_user on kernel addresses") added a check to not call
> might_sleep() on kernel addresses. This was to enable the use of
> __get_user() in the alignment exception handler for any address.
>
> Then commit
userspace access.
>
> In alignment exception handler, use probe_kernel_read_inst()
> instead of __get_user_instr() for reading instructions in kernel.
>
> This will allow to remove the is_kernel_addr() check in
> __get/put_user() in a following patch.
>
Looks good to me!
Reviewed-by: Daniel Axtens
Hi Christophe,
> Those helpers use get_user helpers but they don't participate
> in their implementation, so they do not belong to asm/uaccess.h
>
> Move them in asm/inst.h
Hmm, is asm/inst.h the right place for this?
asm/inst.h seems to be entirely concerned with the ppc_inst type:
converting t
Hi Christophe,
> In the discussion we had long time ago,
> https://patchwork.ozlabs.org/project/linuxppc-dev/patch/20190806233827.16454-5-...@axtens.net/#2321067
>
> , I challenged you on why it was not possible to implement things the same
> way as other
> architectures, in extenso with an e
Balbir Singh writes:
> On Mon, Mar 22, 2021 at 11:55:08AM +1100, Daniel Axtens wrote:
>> Hi Balbir,
>>
>> > Could you highlight the changes from
>> > https://patchwork.ozlabs.org/project/linuxppc-dev/patch/20170729140901.5887-1-bsinghar...@gmail.com/?
>> &
Hi Balbir,
> Could you highlight the changes from
> https://patchwork.ozlabs.org/project/linuxppc-dev/patch/20170729140901.5887-1-bsinghar...@gmail.com/?
>
> Feel free to use my signed-off-by if you need to and add/update copyright
> headers if appropriate.
There's not really anything in common a
Balbir Singh writes:
> On Sat, Mar 20, 2021 at 01:40:53AM +1100, Daniel Axtens wrote:
>> For annoying architectural reasons, it's very difficult to support inline
>> instrumentation on powerpc64.
>
> I think we can expand here and talk about how in hash mode, the vmall
KASAN is supported on 32-bit powerpc and the docs should reflect this.
Suggested-by: Christophe Leroy
Reviewed-by: Christophe Leroy
Signed-off-by: Daniel Axtens
---
Documentation/dev-tools/kasan.rst | 8 ++--
Documentation/powerpc/kasan.txt | 12
2 files changed, 18
kasan is already implied by the directory name, we don't need to
repeat it.
Suggested-by: Christophe Leroy
Signed-off-by: Daniel Axtens
---
arch/powerpc/mm/kasan/Makefile | 2 +-
arch/powerpc/mm/kasan/{kasan_init_32.c => init_32.c} | 0
2 files changed, 1 inserti
Maybe we can do better in
the future.
Cc: Balbir Singh # ppc64 out-of-line radix version
Cc: Aneesh Kumar K.V # ppc64 hash version
Cc: Christophe Leroy # ppc32 version
Signed-off-by: Daniel Axtens
---
Documentation/dev-tools/kasan.rst| 11 +--
Documentation/powerpc/kasan.txt
n', rather than 'y'.)
We also disable stack instrumentation in this case as it does things that
are functionally equivalent to inline instrumentation, namely adding
code that touches the shadow directly without going through a C helper.
Signed-off-by: Daniel Axtens
---
lib/
rk in outline mode, so an arch must specify
ARCH_DISABLE_KASAN_INLINE if it requires this.
Cc: Balbir Singh
Cc: Aneesh Kumar K.V
Suggested-by: Christophe Leroy
Signed-off-by: Daniel Axtens
--
I discuss the justification for this later in the series. Also,
both previous RFCs for ppc64 - by 2 diff
moment, just define them in the kasan
header, and have them default to PTRS_PER_* unless overridden in arch
code.
Suggested-by: Christophe Leroy
Suggested-by: Balbir Singh
Reviewed-by: Christophe Leroy
Reviewed-by: Balbir Singh
Signed-off-by: Daniel Axtens
---
include/linux/kasan.h | 18
h 6.
kexec works. Both 64k and 4k pages work. Running as a KVM host works, but
nothing in arch/powerpc/kvm is instrumented. It's also potentially a bit
fragile - if any real mode code paths call out to instrumented code, things
will go boom.
Kind regards,
Daniel
Daniel Axtens (6):
kasan:
"heying (H)" writes:
> Thank you for your reply.
>
>
> 在 2021/3/17 11:04, Daniel Axtens 写道:
>> Hi He Ying,
>>
>> Thank you for this patch.
>>
>> I'm not sure what the precise rules for Fixes are, but I wonder if this
>> should have
Hi He Ying,
Thank you for this patch.
I'm not sure what the precise rules for Fixes are, but I wonder if this
should have:
Fixes: 9a32a7e78bd0 ("powerpc/64s: flush L1D after user accesses")
Fixes: f79643787e0a ("powerpc/64s: flush L1D on kernel entry")
Those are the commits that added the entry
can be used in atomic parts
> of the code, therefore __get/put_user_inatomic() have become useless.
>
> Remove __get_user_inatomic() and __put_user_inatomic().
>
This makes much more sense, thank you!
Simplifying uaccess.h is always good to me :)
Reviewed-by: Daniel Axtens
Kind regards,
> +Efault_read:
Checkpatch complains that this is CamelCase, which seems like a
checkpatch problem. Efault_{read,write} seem like good labels to me.
(You don't need to change anything, I just like to check the checkpatch
results when reviewing a patch.)
> + user_read_access_end();
> + return -EFAULT;
> +
> +Efault_write:
> + user_write_access_end();
> + return -EFAULT;
> }
> #endif /* CONFIG_SPE */
>
With the user_write_access_begin change:
Reviewed-by: Daniel Axtens
Kind regards,
Daniel
Hi Christophe,
Thanks for the answers to my questions on v1.
This all looks good to me.
Reviewed-by: Daniel Axtens
Kind regards,
Daniel
> Those two macros have only one user which is unsafe_get_user().
>
> Put everything in one place and remove them.
>
> Signed-off-by: C
Christophe Leroy writes:
> Since commit 662bbcb2747c ("mm, sched: Allow uaccess in atomic with
> pagefault_disable()"), __get/put_user() can be used in atomic parts
> of the code, therefore the __get/put_user_inatomic() introduced
> by commit e68c825bb016 ("[POWERPC] Add inatomic versions of __ge
documenting what I
considered while reviewing your patch.)
As such:
Reviewed-by: Daniel Axtens
Kind regards,
Daniel
> -
> extern long __put_user_bad(void);
>
> #define __put_user_size(x, ptr, size, retval)\
> --
> 2.25.0
Christophe Leroy writes:
> Those two macros have only one user which is unsafe_get_user().
>
> Put everything in one place and remove them.
>
> Signed-off-by: Christophe Leroy
> ---
> arch/powerpc/include/asm/uaccess.h | 10 +-
> 1 file changed, 5 insertions(+), 5 deletions(-)
>
> di
Hi Yang,
> This eliminates the following coccicheck warning:
> ./arch/powerpc/boot/mktree.c:130:31-32: WARNING: Use ARRAY_SIZE
>
> Reported-by: Abaci Robot
> Signed-off-by: Yang Li
> ---
> arch/powerpc/boot/mktree.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/arch/p
Christophe Leroy writes:
> Le 03/02/2021 à 12:59, Daniel Axtens a écrit :
>> Implement a limited form of KASAN for Book3S 64-bit machines running under
>> the Radix MMU, supporting only outline mode.
>>
>
>> diff --git a/arch/powerpc/kernel/process.c b/arch/pow
Maybe we can do better in
the future.
Cc: Balbir Singh # ppc64 out-of-line radix version
Cc: Aneesh Kumar K.V # ppc64 hash version
Cc: Christophe Leroy # ppc32 version
Signed-off-by: Daniel Axtens
---
Documentation/dev-tools/kasan.rst| 9 +-
Documentation/powerpc/kasan.txt
KASAN is supported on 32-bit powerpc and the docs should reflect this.
Document s390 support while we're at it.
Suggested-by: Christophe Leroy
Reviewed-by: Christophe Leroy
Signed-off-by: Daniel Axtens
---
Documentation/dev-tools/kasan.rst | 7 +--
Documentation/powerpc/kasan.txt
Building on the work of Christophe, Aneesh and Balbir, I've ported
KASAN to 64-bit Book3S kernels running on the Radix MMU.
v10 rebases on top of next-20210125, fixing things up to work on top
of the latest changes, and fixing some review comments from
Christophe. I have tested host and guest with
Hi Christophe,
>> select HAVE_ARCH_HUGE_VMAP if PPC_BOOK3S_64 &&
>> PPC_RADIX_MMU
>> select HAVE_ARCH_JUMP_LABEL
>> select HAVE_ARCH_KASAN if PPC32 && PPC_PAGE_SHIFT <= 14
>> -select HAVE_ARCH_KASAN_VMALLOC if PPC32 && PPC_PAGE_SHIFT <= 14
do_filp_open+0x48/0xa4
> ? kmem_cache_alloc+0xf5/0x16e
> ? __clear_close_on_exec+0x13/0x22
> ? _raw_spin_unlock+0xa/0xb
> do_sys_openat2+0x72/0xde
> do_sys_open+0x3b/0x58
> do_syscall_64+0x2d/0x3a
> entry_SYSCALL
>> Fixes: 36dadef23fcc ("kprobes: Init kprobes in early_initcall")
>> Signed-off-by: Uladzislau Rezki (Sony)
Tested-by: Daniel Axtens
>> ---
>> include/linux/rcupdate.h | 6 ++
>> init/main.c | 1 +
>> kernel/rcu/tasks.h | 26 +
rk in outline mode, so an arch must specify
HAVE_ARCH_NO_KASAN_INLINE if it requires this.
Cc: Balbir Singh
Cc: Aneesh Kumar K.V
Signed-off-by: Christophe Leroy
Signed-off-by: Daniel Axtens
--
I discuss the justification for this later in the series. Also,
both previous RFCs for ppc64 - by 2 diff
'n', rather than 'y'.)
Signed-off-by: Daniel Axtens
---
lib/Kconfig.kasan | 4
1 file changed, 4 insertions(+)
diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
index 542a9c18398e..31a0b28f6c2b 100644
--- a/lib/Kconfig.kasan
+++ b/lib/Kconfig.kasan
@@ -9,6 +
es. (I suspect this may have previously worked if the code
ended up in .ctors rather than .init_array but I don't keep my old binaries
around so I have no real way of checking.)
Daniel Axtens (6):
kasan: allow an architecture to disable inline instrumentation
kasan: allow architectur
Thanks sfr and mpe.
> Applied to powerpc/fixes.
>
> [1/1] powerpc/64s: Fix allnoconfig build since uaccess flush
>
> https://git.kernel.org/powerpc/c/b6b79dd53082db11070b4368d85dd6699ff0b063
We also needed a similar fix for stable, which has also been applied.
I guess I should build some
Hi,
>> Build results:
>> total: 165 pass: 164 fail: 1
>> Failed builds:
>> powerpc:ppc64e_defconfig
>> Qemu test results:
>> total: 328 pass: 323 fail: 5
>> Failed tests:
>> ppc64:ppce500:corenet64_smp_defconfig:e5500:initrd
>> ppc64:ppce500:corenet64_smp_defconfig:e5500:nv
lock/unlock
overhead.
Unscientific benchmarking with the poll2_threads microbenchmark from
will-it-scale, run as `./poll2_threads -t 1 -s 15`:
- Bare-metal Power9 with KUAP: ~49% speed-up
- VM on amd64 laptop with SMAP: ~25% speed-up
Signed-off-by: Daniel Axtens
---
fs/select.c | 10
lock/unlock
overhead.
Unscientific benchmarking with the poll2_threads microbenchmark from
will-it-scale, run as `./poll2_threads -t 1 -s 15`:
- Bare-metal Power9 with KUAP: ~49% speed-up
- VM on amd64 laptop with SMAP: ~25% speed-up
Signed-off-by: Daniel Axtens
---
v2: Use copy_to_user
Hi,
>> Seem like this could simply use a copy_to_user to further simplify
>> things?
>
> I'll benchmark it and find out.
I tried this:
for (walk = head; walk; walk = walk->next) {
- struct pollfd *fds = walk->entries;
- int j;
-
- for (j = 0; j <
Christoph Hellwig writes:
> On Thu, Aug 13, 2020 at 05:11:20PM +1000, Daniel Axtens wrote:
>> When returning results to userspace, do_sys_poll repeatedly calls
>> put_user() - once per fd that it's watching.
>>
>> This means that on architectures that support som
ately.
Unscientific benchmarking with the poll2_threads microbenchmark from
will-it-scale, run as `./poll2_threads -t 1 -s 15`:
- Bare-metal Power9 with KUAP: ~48.8% speed-up
- VM on amd64 laptop with SMAP: ~25.5% speed-up
Signed-off-by: Daniel Axtens
---
fs/select.c | 14 --
Hi Mike,
>
> The memory size calculation in kvm_cma_reserve() traverses memblock.memory
> rather than simply call memblock_phys_mem_size(). The comment in that
> function suggests that at some point there should have been call to
> memblock_analyze() before memblock_phys_mem_size() could be used.
rs to make sure no obvious
kernel bugs were exposed. Nothing crashed.
All tests done on a P8 LE guest under KVM.
On that basis:
Tested-by: Daniel Axtens
The more I look at this the less qualified I feel to Review it, but
certainly it looks better than my ugly hack from late last year.
Kind regards,
Hi Michael,
I have tested this with the test from the bug and it now seems to pass
fine. On that basis:
Tested-by: Daniel Axtens
Thank you for coming up with a better solution than my gross hack!
Kind regards,
Daniel
> We have powerpc specific logic in our page fault handling to decide
Hi Michael,
> We have powerpc specific logic in our page fault handling to decide if
> an access to an unmapped address below the stack pointer should expand
> the stack VMA.
>
> The logic aims to prevent userspace from doing bad accesses below the
> stack pointer. However as long as the stack is
Hi Michael,
Unfortunately, this patch doesn't completely solve the problem.
Trying the original reproducer, I'm still able to trigger the crash even
with this patch, although not 100% of the time. (If I turn ASLR off
outside of tmux it reliably crashes, if I turn ASLR off _inside_ of tmux
it reli
suppose?
I'm not quite sure what you mean? They'll be documented in a future
revision of the PAPR, once I get my act together and submit the
relevant internal paperwork.
Daniel
>
> Thanks
>
> Michal
>> 0 - Disabled
>> 1 - Enabled
>>
>> Signed-off-b
Hi Nayna,
Looks good to me.
Sorry for not noticing this before, but I think
> +#include
is now superfluous (I think it's leftover from the machine_is
version?). Maybe mpe will take pity on you and remove it when he picks
up your patch.
Kind regards,
Daniel
>
> static struct device_node *get
Hi Nayna,
Thanks! Would you be able to fold in some of the information from my
reply to v1 into the changelog? Until we have public PAPR release with
it, that information is the extent of the public documentation. It would
be good to get it into the git log rather than just floating around in
the
ublished in future. (Currently, trusted boot state is inferred by the
> presence or absence of a vTPM.) It's simply 1 = enabled, 0 = disabled.
>
> As for this patch specifically, with the very small nits below,
>
> Reviewed-by: Daniel Axtens
>
>> -node = get_ppc_fw_s
be
published in future. (Currently, trusted boot state is inferred by the
presence or absence of a vTPM.) It's simply 1 = enabled, 0 = disabled.
As for this patch specifically, with the very small nits below,
Reviewed-by: Daniel Axtens
> - node = get_ppc_fw_sb_node();
> - enabl
ed relocations/trampolines rather than anything to
do with KASAN.
Daniel Axtens (4):
kasan: define and use MAX_PTRS_PER_* for early shadow tables
kasan: Document support on 32-bit powerpc
powerpc/mm/kasan: rename kasan_init_32.c to init_32.c
powerpc: Book3S 64-bit "heavyweight" KASA
rman
Cc: Balbir Singh # ppc64 out-of-line radix version
Cc: Christophe Leroy # ppc32 version
Reviewed-by: # focussed mainly on Documentation
and things impacting PPC32
Signed-off-by: Daniel Axtens
---
Documentation/dev-tools/kasan.rst| 9 +-
perhaps I'm too
risk-averse.
Regards,
Daniel
> From: Daniel Axtens
>
> [ Upstream commit 47227d27e2fcb01a9e8f5958d8997cf47a820afc ]
>
> The memcmp KASAN self-test fails on a kernel with both KASAN and
> FORTIFY_SOURCE.
>
> When FORTIFY_SOURCE is on, a number of functions a
Reported-by: syzbot+58320b7171734bf79...@syzkaller.appspotmail.com
>> > Reported-by: syzbot+d6074fb08bdb2e010...@syzkaller.appspotmail.com
>> > Cc: Akash Goel
>> > Cc: Andrew Donnellan # syzkaller-ppc64
>> > Reviewed-by: Michael Ellerman
>> > Reviewed-by: Andr
Provide the current number of vmalloc shadow pages in
/sys/kernel/debug/kasan/vmalloc_shadow_pages.
Signed-off-by: Daniel Axtens
---
v8: rename kasan_vmalloc/shadow_pages -> kasan/vmalloc_shadow_pages
On v4 (no dynamic freeing), I saw the following approximate figures
on my test VM:
- fr
illed dynamically.
Acked-by: Dmitry Vyukov
Signed-off-by: Daniel Axtens
---
v5: fix some checkpatch CHECK warnings. There are some that remain
around lines ending with '(': I have not changed these because
it's consistent with the rest of the file and it's not easy
0191001065834.8880-1-...@axtens.net/
rename kasan_vmalloc/shadow_pages to kasan/vmalloc_shadow_pages
v9: address a number of review comments for patch 1.
Daniel Axtens (5):
kasan: support backing vmalloc space with real shadow memory
kasan: add test for vmalloc
fork: support VMAP_STACK with
.
Link: https://bugzilla.kernel.org/show_bug.cgi?id=202009
Acked-by: Vasily Gorbik
Co-developed-by: Mark Rutland
Signed-off-by: Mark Rutland [shadow rework]
Signed-off-by: Daniel Axtens
--
[I haven't tried to resolve the question of spurious faults. My
understanding is that in order to se
Supporting VMAP_STACK with KASAN_VMALLOC is straightforward:
- clear the shadow region of vmapped stacks when swapping them in
- tweak Kconfig to allow VMAP_STACK to be turned on with KASAN
Reviewed-by: Dmitry Vyukov
Signed-off-by: Daniel Axtens
---
arch/Kconfig | 9 +
kernel
Test kasan vmalloc support by adding a new test to the module.
Signed-off-by: Daniel Axtens
--
v5: split out per Christophe Leroy
---
lib/test_kasan.c | 26 ++
1 file changed, 26 insertions(+)
diff --git a/lib/test_kasan.c b/lib/test_kasan.c
index 49cc4d570a40
> There is a potential problem here, as Will Deacon wrote up at:
>
>
> https://lore.kernel.org/linux-arm-kernel/20190827131818.14724-1-w...@kernel.org/
>
> ... in the section starting:
>
> | *** Other architecture maintainers -- start here! ***
>
> ... whereby the CPU can spuriously fault on a
>>> @@ -2497,6 +2533,9 @@ void *__vmalloc_node_range(unsigned long size,
>>> unsigned long align,
>>> if (!addr)
>>> return NULL;
>>>
>>> + if (kasan_populate_vmalloc(real_size, area))
>>> + return NULL;
>>> +
>>
>> KASAN itself uses __vmalloc_node_range() to allocate
Mark Rutland writes:
> On Tue, Oct 01, 2019 at 04:58:30PM +1000, Daniel Axtens wrote:
>> Hook into vmalloc and vmap, and dynamically allocate real shadow
>> memory to back the mappings.
>>
>> Most mappings in vmalloc space are small, requiring less than a f
Hi Andrey,
>> +/*
>> + * Ensure poisoning is visible before the shadow is made visible
>> + * to other CPUs.
>> + */
>> +smp_wmb();
>
> I'm not quite understand what this barrier do and why it needed.
> And if it's really needed there should be a pairing barrier
> on the other
Hi Uladzislau,
> Looking at it one more, i think above part of code is a bit wrong
> and should be separated from merge_or_add_vmap_area() logic. The
> reason is to keep it simple and do only what it is supposed to do:
> merging or adding.
>
> Also the kasan_release_vmalloc() gets called twice th
Marco Elver writes:
> Hi Daniel,
>
> On Tue, 1 Oct 2019 at 16:50, Daniel Axtens wrote:
>>
>> Hi Marco,
>>
>> > We would like to share a new data-race detector for the Linux kernel:
>> > Kernel Concurrency Sanitizer (KCSAN) --
>> > https://gi
Hi,
>> /*
>> * Find a place in the tree where VA potentially will be
>> * inserted, unless it is merged with its sibling/siblings.
>> @@ -741,6 +752,10 @@ merge_or_add_vmap_area(struct vmap_area *va,
>> if (sibling->va_end == va->va_start) {
>> si
Hi Marco,
> We would like to share a new data-race detector for the Linux kernel:
> Kernel Concurrency Sanitizer (KCSAN) --
> https://github.com/google/ktsan/wiki/KCSAN (Details:
> https://github.com/google/ktsan/blob/kcsan/Documentation/dev-tools/kcsan.rst)
This builds and begins to boot on pow
.
Link: https://bugzilla.kernel.org/show_bug.cgi?id=202009
Acked-by: Vasily Gorbik
Signed-off-by: Daniel Axtens
[Mark: rework shadow allocation]
Signed-off-by: Mark Rutland
--
v2: let kasan_unpoison_shadow deal with ranges that do not use a
full shadow byte.
v3: relax module alignment
hadow_pages to kasan/vmalloc_shadow_pages
Daniel Axtens (5):
kasan: support backing vmalloc space with real shadow memory
kasan: add test for vmalloc
fork: support VMAP_STACK with KASAN_VMALLOC
x86/kasan: support KASAN_VMALLOC
kasan debug: track pages allocated for vmalloc shadow
Documentation/dev-t