On Mon, 2017-11-20 at 17:13 +0200, Ido Schimmel wrote:
> On Sun, Nov 19, 2017 at 12:45:41PM +0000, Anders K. Pedersen | Cohaesio wrote:
> > Hello,
> >
> > A few days ago, one of our routers (running Linux 4.13.9) crashed due
> > to a general protection fault in dst_destroy(). At the time, it had run
> > for several weeks without any problems, but then crashed three times in
> > a row within a few minutes - all due to a general protection fault at
> > dst_destroy()+0x35. Since then, it has run for several days without any
> > further problems, so I suspect that this was triggered by a traffic
> > pattern in the routed packets, but I don't have a way to reproduce it.
> >
> > Disassembly shows that this is in the inlined dev_put(), which does
> > this_cpu_dec(*dev->pcpu_refcnt). As far as I can tell there haven't
> > been any fixes in this area since 4.13, and a Google search didn't find
> > anything recent, so I'm guessing this is not a known problem.
> >
> > I have included the kernel output via serial console below as well as
> > gdb and objdump information. Please let me know if I can provide any
> > additional information.
> >
> >
> > [2024260.461401] general protection fault: 0000 [#1] SMP
> > [2024260.467193] Modules linked in:
> > [2024260.470897] CPU: 15 PID: 0 Comm: swapper/15 Tainted: G        W       4.13.9 #2
> > [2024260.479488] Hardware name: Dell Inc. PowerEdge R730/0H21J3, BIOS 2.5.5 08/16/2017
> > [2024260.488279] task: ffff88085b625cc0 task.stack: ffffc900000e4000
> > [2024260.495277] RIP: 0010:dst_destroy+0x35/0xa0
> > [2024260.500277] RSP: 0018:ffff88085f5c3f08 EFLAGS: 00010286
> > [2024260.506474] RAX: ffff88085ac0e880 RBX: ffff88082cf9fb00 RCX: 0000000000000020
> > [2024260.514868] RDX: ffff88082cf9fbc0 RSI: ffffffffffffffff RDI: ffffffff816786c0
> > [2024260.523258] RBP: 0000000000000000 R08: ffffffffffffff00 R09: 0000000000000000
> > [2024260.531649] R10: 0000000000000000 R11: 0000000000000000 R12: ffff88085f5da678
> > [2024260.540040] R13: 000000000000000a R14: ffff88085b625cc0 R15: ffff88085b625cc0
> > [2024260.548431] FS:  0000000000000000(0000) GS:ffff88085f5c0000(0000) knlGS:0000000000000000
> > [2024260.557924] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> > [2024260.564719] CR2: 00007fc800e48e88 CR3: 0000000001809000 CR4: 00000000001406e0
> > [2024260.573112] Call Trace:
> > [2024260.576113]  <IRQ>
> > [2024260.578618]  ? rcu_process_callbacks+0x18f/0x460
> > [2024260.584126]  ? rebalance_domains+0xe2/0x290
> > [2024260.589128]  ? __do_softirq+0x100/0x292
> > [2024260.593727]  ? irq_exit+0x92/0xa0
> > [2024260.597729]  ? smp_apic_timer_interrupt+0x39/0x50
> > [2024260.603328]  ? apic_timer_interrupt+0x7c/0x90
> > [2024260.608528]  </IRQ>
> > [2024260.611134]  ? cpuidle_enter_state+0x14c/0x2b0
> > [2024260.616432]  ? cpuidle_enter_state+0x128/0x2b0
> > [2024260.621731]  ? do_idle+0xf9/0x190
> > [2024260.625733]  ? cpu_startup_entry+0x5f/0x70
> > [2024260.630636]  ? start_secondary+0x12a/0x130
> > [2024260.635536]  ? secondary_startup_64+0x9f/0x9f
> > [2024260.640731] Code: f6 47 60 08 48 8b 6f 18 74 62 48 8b 43 20 48 8b 40 30 48 85 c0 74 05 48 89 df ff d0 48 8b 03 48 85 c0 74 0a 48 8b 80 e0 03 00 00 <65> ff 08 f6 43 60 80 74 26 48 8d bb e0 00 00 00 e8 e6 7f 01 00
> > [2024260.662626] RIP: dst_destroy+0x35/0xa0 RSP: ffff88085f5c3f08
> > [2024260.669333] ---[ end trace 3c1827251806827c ]---
> > [2024260.724173] Kernel panic - not syncing: Fatal exception in interrupt
> > [2024261.102792] Kernel Offset: disabled
> > [2024261.156022] Rebooting in 60 seconds..
> > [2024321.167958] ACPI MEMORY or I/O RESET_REG.
>
> This looks very similar to a bug Eric already fixed here:
> https://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next.git/commit/?id=222d7dbd258dad4cd5241c43ef818141fad5a87a
>
> I don't see it in v4.13.9, which might explain why you're still hitting
> it. Can you please try to reproduce with the mentioned patch?
Yes, it looks like it could be related. I see that it is included in
v4.14, so we'll update to that and see if it comes back.

Thanks,
Anders