Re: [PATCH v2 0/7] KASan for arm
On Mon, Mar 19, 2018 at 2:56 AM, Liuwenliang (Abbott Liu) <liuwenli...@huawei.com> wrote:
> On 03/19/2018 09:23 AM, Florian Fainelli wrote:
>> On 03/18/2018 06:20 PM, Liuwenliang (Abbott Liu) wrote:
>>> On 03/19/2018 03:14 AM, Florian Fainelli wrote:
>>>> Thanks for posting these patches! Just FWIW, you cannot quite add
>>>> someone's Tested-by for a patch series that was just resubmitted given
>>>> the differences with v1. I just gave it a spin on a Cortex-A5 (no LPAE)
>>>> and it looks like test_kasan.ko is passing, great job!
>>>
>>> I'm sorry, and thank you very much for your testing!
>>> I forgot to add the Tested-by tag in the cover letter, but I have
>>> already added Tested-by to some of the following patches.
>>> In the next version I am going to add Tested-by to all patches.
>>
>> This is not exactly what I meant. When you submit a v2 of your patches,
>> you must wait for people to give you their test results. The Tested-by
>> applied to v1, and so much has changed it is no longer valid for v2
>> unless someone tells you they tested v2. Hope this is clearer.
>
> OK, I understand now. Thank you for the explanation.

Hi Abbott,

I've skimmed through the changes and they generally look good to me. I am
not an expert in arm, so I did not look too closely at those parts (which
is actually most of the changes).

FWIW

Acked-by: Dmitry Vyukov <dvyu...@google.com>

Please also update the set of supported arches at the top of
Documentation/dev-tools/kasan.rst.

Thanks for working on upstreaming this!

___
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm
kvm: deadlock in kvm_vgic_map_resources
Hello,

While running the syzkaller fuzzer I got the deadlock below, on commit
9c763584b7c8911106bb77af7e648bef09af9d80.

=============================================
[ INFO: possible recursive locking detected ]
4.9.0-rc6-xc2-00056-g08372dd4b91d-dirty #50 Not tainted
---------------------------------------------
syz-executor/20805 is trying to acquire lock:
 (&kvm->lock){+.+.+.}, at: [< inline >] kvm_vgic_dist_destroy arch/arm64/kvm/../../../virt/kvm/arm/vgic/vgic-init.c:271
 [] kvm_vgic_destroy+0x34/0x250 arch/arm64/kvm/../../../virt/kvm/arm/vgic/vgic-init.c:294

but task is already holding lock:
 (&kvm->lock){+.+.+.}, at: [] kvm_vgic_map_resources+0x2c/0x108 arch/arm64/kvm/../../../virt/kvm/arm/vgic/vgic-init.c:343

other info that might help us debug this:
 Possible unsafe locking scenario:

       CPU0
       ----
  lock(&kvm->lock);
  lock(&kvm->lock);

 *** DEADLOCK ***

 May be due to missing lock nesting notation

2 locks held by syz-executor/20805:
 #0: (&vcpu->mutex){+.+.+.}, at: [] vcpu_load+0x28/0x1d0 arch/arm64/kvm/../../../virt/kvm/kvm_main.c:143
 #1: (&kvm->lock){+.+.+.}, at: [] kvm_vgic_map_resources+0x2c/0x108 arch/arm64/kvm/../../../virt/kvm/arm/vgic/vgic-init.c:343

stack backtrace:
CPU: 2 PID: 20805 Comm: syz-executor Not tainted 4.9.0-rc6-xc2-00056-g08372dd4b91d-dirty #50
Hardware name: Hardkernel ODROID-C2 (DT)
Call trace:
 [] dump_backtrace+0x0/0x3c8 arch/arm64/kernel/traps.c:69
 [] show_stack+0x20/0x30 arch/arm64/kernel/traps.c:219
 [< inline >] __dump_stack lib/dump_stack.c:15
 [] dump_stack+0x100/0x150 lib/dump_stack.c:51
 [< inline >] print_deadlock_bug kernel/locking/lockdep.c:1728
 [< inline >] check_deadlock kernel/locking/lockdep.c:1772
 [< inline >] validate_chain kernel/locking/lockdep.c:2250
 [] __lock_acquire+0x1938/0x3440 kernel/locking/lockdep.c:3335
 [] lock_acquire+0xdc/0x1d8 kernel/locking/lockdep.c:3746
 [< inline >] __mutex_lock_common kernel/locking/mutex.c:521
 [] mutex_lock_nested+0xdc/0x7b8 kernel/locking/mutex.c:621
 [< inline >] kvm_vgic_dist_destroy arch/arm64/kvm/../../../virt/kvm/arm/vgic/vgic-init.c:271
 [] kvm_vgic_destroy+0x34/0x250 arch/arm64/kvm/../../../virt/kvm/arm/vgic/vgic-init.c:294
 [] vgic_v2_map_resources+0x218/0x430 arch/arm64/kvm/../../../virt/kvm/arm/vgic/vgic-v2.c:295
 [] kvm_vgic_map_resources+0xcc/0x108 arch/arm64/kvm/../../../virt/kvm/arm/vgic/vgic-init.c:348
 [< inline >] kvm_vcpu_first_run_init arch/arm64/kvm/../../../arch/arm/kvm/arm.c:505
 [] kvm_arch_vcpu_ioctl_run+0xab8/0xce0 arch/arm64/kvm/../../../arch/arm/kvm/arm.c:591
 [] kvm_vcpu_ioctl+0x434/0xc08 arch/arm64/kvm/../../../virt/kvm/kvm_main.c:2557
 [< inline >] vfs_ioctl fs/ioctl.c:43
 [] do_vfs_ioctl+0x128/0xfc0 fs/ioctl.c:679
 [< inline >] SYSC_ioctl fs/ioctl.c:694
 [] SyS_ioctl+0xa8/0xb8 fs/ioctl.c:685
 [] el0_svc_naked+0x24/0x28 arch/arm64/kernel/entry.S:755

INFO: task syz-executor:20805 blocked for more than 120 seconds.
      Not tainted 4.9.0-rc6-xc2-00056-g08372dd4b91d-dirty #50
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
syz-executor    D    0 20805      1 0x0001
Call trace:
 [] __switch_to+0x184/0x258 arch/arm64/kernel/process.c:345
 [< inline >] context_switch kernel/sched/core.c:2899
 [] __schedule+0x42c/0x1298 kernel/sched/core.c:3402
 [] schedule+0xc8/0x260 kernel/sched/core.c:3457
 [] schedule_preempt_disabled+0x74/0x110 kernel/sched/core.c:3490
 [< inline >] __mutex_lock_common kernel/locking/mutex.c:582
 [] mutex_lock_nested+0x318/0x7b8 kernel/locking/mutex.c:621
 [< inline >] kvm_vgic_dist_destroy arch/arm64/kvm/../../../virt/kvm/arm/vgic/vgic-init.c:271
 [] kvm_vgic_destroy+0x34/0x250 arch/arm64/kvm/../../../virt/kvm/arm/vgic/vgic-init.c:294
 [] vgic_v2_map_resources+0x218/0x430 arch/arm64/kvm/../../../virt/kvm/arm/vgic/vgic-v2.c:295
 [] kvm_vgic_map_resources+0xcc/0x108 arch/arm64/kvm/../../../virt/kvm/arm/vgic/vgic-init.c:348
 [< inline >] kvm_vcpu_first_run_init arch/arm64/kvm/../../../arch/arm/kvm/arm.c:505
 [] kvm_arch_vcpu_ioctl_run+0xab8/0xce0 arch/arm64/kvm/../../../arch/arm/kvm/arm.c:591
 [] kvm_vcpu_ioctl+0x434/0xc08 arch/arm64/kvm/../../../virt/kvm/kvm_main.c:2557
 [< inline >] vfs_ioctl fs/ioctl.c:43
 [] do_vfs_ioctl+0x128/0xfc0 fs/ioctl.c:679
 [< inline >] SYSC_ioctl fs/ioctl.c:694
 [] SyS_ioctl+0xa8/0xb8 fs/ioctl.c:685
 [] el0_svc_naked+0x24/0x28 arch/arm64/kernel/entry.S:755