On 11/5/24 6:02 PM, Omid Ehtemam-Haghighi wrote:
> Soft lockups have been observed on a cluster of Linux-based edge routers
> located in a highly dynamic environment. Using the `bird` service, these
> routers continuously update BGP-advertised routes due to frequently
> changing nexthop destinations, while also managing significant IPv6
> traffic. The lockups occur while traversing the multipath circular
> linked list in the `fib6_select_path` function, particularly while
> iterating through the siblings in the list. The issue typically
> arises when nodes of the linked list are unexpectedly deleted
> concurrently on a different core, as indicated by their 'next' and
> 'previous' pointers referring back to the node itself and their
> reference count dropping to zero. This results in an infinite loop
> and a soft lockup that triggers a system panic via the watchdog timer.
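> 
> As an illustration, a simplified sketch of the hazard (not the exact
> fib6_select_path code; 'match', 'rt', and the helper nexthop_matches()
> are placeholder names for this sketch):
> 
>         /* Reader: walks the circular siblings list unprotected. */
>         list_for_each_entry(sibling, &match->fib6_siblings,
>                             fib6_siblings) {
>                 if (nexthop_matches(sibling, hash))
>                         break;
>         }
> 
>         /* Writer, on another CPU: unlinks an entry without waiting
>          * for readers. list_del_init() makes the removed entry's
>          * 'next' and 'prev' point back to itself.
>          */
>         list_del_init(&rt->fib6_siblings);
> 
> Once the iterator lands on the self-linked entry, following 'next'
> never reaches the list head again, so the loop spins forever.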
> 
> Apply RCU primitives in the problematic code sections to resolve the
> issue. Where necessary, update the references to fib6_siblings to
> annotate or use the RCU APIs.
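> 
> The shape of the fix is the standard RCU list pattern (a sketch of
> the pattern, not the patch itself):
> 
>         /* Writer: unlink with the RCU list primitive. The entry is
>          * freed only after a grace period, so in-flight readers can
>          * still follow its 'next' pointer out to the rest of the
>          * list.
>          */
>         list_del_rcu(&rt->fib6_siblings);
> 
>         /* Reader: traverse inside an RCU read-side critical section. */
>         rcu_read_lock();
>         list_for_each_entry_rcu(sibling, &match->fib6_siblings,
>                                 fib6_siblings) {
>                 /* ... */
>         }
>         rcu_read_unlock();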
> 
> Include a test script that reproduces the issue. The script
> periodically updates the routing table while generating a heavy load
> of outgoing IPv6 traffic through multiple iperf3 clients. It
> consistently induces the soft lockup within a couple of minutes.
> 
> Kernel log:
> 
>  0 [ffffbd13003e8d30] machine_kexec at ffffffff8ceaf3eb
>  1 [ffffbd13003e8d90] __crash_kexec at ffffffff8d0120e3
>  2 [ffffbd13003e8e58] panic at ffffffff8cef65d4
>  3 [ffffbd13003e8ed8] watchdog_timer_fn at ffffffff8d05cb03
>  4 [ffffbd13003e8f08] __hrtimer_run_queues at ffffffff8cfec62f
>  5 [ffffbd13003e8f70] hrtimer_interrupt at ffffffff8cfed756
>  6 [ffffbd13003e8fd0] __sysvec_apic_timer_interrupt at ffffffff8cea01af
>  7 [ffffbd13003e8ff0] sysvec_apic_timer_interrupt at ffffffff8df1b83d
> -- <IRQ stack> --
>  8 [ffffbd13003d3708] asm_sysvec_apic_timer_interrupt at ffffffff8e000ecb
>     [exception RIP: fib6_select_path+299]
>     RIP: ffffffff8ddafe7b  RSP: ffffbd13003d37b8  RFLAGS: 00000287
>     RAX: ffff975850b43600  RBX: ffff975850b40200  RCX: 0000000000000000
>     RDX: 000000003fffffff  RSI: 0000000051d383e4  RDI: ffff975850b43618
>     RBP: ffffbd13003d3800   R8: 0000000000000000   R9: ffff975850b40200
>     R10: 0000000000000000  R11: 0000000000000000  R12: ffffbd13003d3830
>     R13: ffff975850b436a8  R14: ffff975850b43600  R15: 0000000000000007
>     ORIG_RAX: ffffffffffffffff  CS: 0010  SS: 0018
>  9 [ffffbd13003d3808] ip6_pol_route at ffffffff8ddb030c
> 10 [ffffbd13003d3888] ip6_pol_route_input at ffffffff8ddb068c
> 11 [ffffbd13003d3898] fib6_rule_lookup at ffffffff8ddf02b5
> 12 [ffffbd13003d3928] ip6_route_input at ffffffff8ddb0f47
> 13 [ffffbd13003d3a18] ip6_rcv_finish_core.constprop.0 at ffffffff8dd950d0
> 14 [ffffbd13003d3a30] ip6_list_rcv_finish.constprop.0 at ffffffff8dd96274
> 15 [ffffbd13003d3a98] ip6_sublist_rcv at ffffffff8dd96474
> 16 [ffffbd13003d3af8] ipv6_list_rcv at ffffffff8dd96615
> 17 [ffffbd13003d3b60] __netif_receive_skb_list_core at ffffffff8dc16fec
> 18 [ffffbd13003d3be0] netif_receive_skb_list_internal at ffffffff8dc176b3
> 19 [ffffbd13003d3c50] napi_gro_receive at ffffffff8dc565b9
> 20 [ffffbd13003d3c80] ice_receive_skb at ffffffffc087e4f5 [ice]
> 21 [ffffbd13003d3c90] ice_clean_rx_irq at ffffffffc0881b80 [ice]
> 22 [ffffbd13003d3d20] ice_napi_poll at ffffffffc088232f [ice]
> 23 [ffffbd13003d3d80] __napi_poll at ffffffff8dc18000
> 24 [ffffbd13003d3db8] net_rx_action at ffffffff8dc18581
> 25 [ffffbd13003d3e40] __do_softirq at ffffffff8df352e9
> 26 [ffffbd13003d3eb0] run_ksoftirqd at ffffffff8ceffe47
> 27 [ffffbd13003d3ec0] smpboot_thread_fn at ffffffff8cf36a30
> 28 [ffffbd13003d3ee8] kthread at ffffffff8cf2b39f
> 29 [ffffbd13003d3f28] ret_from_fork at ffffffff8ce5fa64
> 30 [ffffbd13003d3f50] ret_from_fork_asm at ffffffff8ce03cbb
> 
> Fixes: 66f5d6ce53e6 ("ipv6: replace rwlock with rcu and spinlock in fib6_table")
> Reported-by: Adrian Oliver <ker...@aoliver.ca>
> Signed-off-by: Omid Ehtemam-Haghighi <omid.ehtemamhaghi...@menlosecurity.com>
> Cc: David S. Miller <da...@davemloft.net>
> Cc: David Ahern <dsah...@gmail.com>
> Cc: Eric Dumazet <eduma...@google.com>
> Cc: Jakub Kicinski <k...@kernel.org>
> Cc: Paolo Abeni <pab...@redhat.com>
> Cc: Shuah Khan <sh...@kernel.org>
> Cc: Ido Schimmel <ido...@idosch.org>
> Cc: Kuniyuki Iwashima <kun...@amazon.com>
> Cc: Simon Horman <ho...@kernel.org>
> Cc: Omid Ehtemam-Haghighi <oeh.ker...@gmail.com>
> Cc: net...@vger.kernel.org
> Cc: linux-kselft...@vger.kernel.org
> Cc: linux-kernel@vger.kernel.org
> ---
> v6 -> v7: 
>       * Rebased on top of 'net-next'
> 
> v5 -> v6:
>       * Adjust the comment line lengths in the test script to a maximum
>         of 80 characters
>       * Change memory allocation in inet6_rt_notify from gfp_any() to
>         GFP_ATOMIC for atomic allocation in non-blocking contexts, as
>         suggested by Ido Schimmel
>       * NOTE: I have executed the test script on both bare-metal servers
>         and virtualized environments such as QEMU and vng. On bare
>         metal, it consistently triggers a soft lockup in under a minute
>         on unpatched kernels. In the virtualized environments, an
>         unpatched kernel compiled with the Ubuntu 24.04 configuration
>         also triggers a soft lockup, though it takes longer; however,
>         it did not trigger a soft lockup on kernels compiled with the
>         configurations provided in:
> 
>         https://github.com/linux-netdev/nipa/wiki/How-to-run-netdev-selftests-CI-style
> 
>         leading to potential false negatives in the test results.
> 
>         I am curious whether this test can be executed on a bare-metal
>         machine within a CI system, if such a setup exists, rather than
>         in a virtualized environment. If that's not possible, how can I
>         apply a different kernel configuration, such as the one used in
>         Ubuntu 24.04, for this test? Please advise.
> 
> v4 -> v5:
>       * Addressed review comments from Paolo Abeni.
>       * Added additional clarifying comments in the test script.
>       * Minor cleanup performed in the test script.
> 
> v3 -> v4:
>       * Added RCU primitives to rt6_fill_node(). I found that this
>         function is typically called either with a table lock held or
>         within rcu_read_lock/rcu_read_unlock pairs, except in the
>         following call chain, where the protection is unclear:
> 
>               rt6_fill_node()
>               fib6_info_hw_flags_set()
>               mlxsw_sp_fib6_offload_failed_flag_set()
>               mlxsw_sp_router_fib6_event_work()
> 
>         The last function is initialized as a work item in
>         mlxsw_sp_router_fib_event() and scheduled for deferred
>         execution. I am unsure whether the execution context of this
>         work item is protected by any table lock or
>         rcu_read_lock/rcu_read_unlock pair, so I have added the
>         protection (see the sketch below). Please let me know if this
>         is redundant.
> 
>       * Other review comments addressed
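> 
>         The protection added for that path is roughly of the following
>         shape (a simplified sketch; the elided body is a placeholder,
>         not the actual diff):
> 
>               rcu_read_lock();
>               /* ... dereference fib6_siblings / nexthop state ... */
>               rcu_read_unlock();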
> 
> v2 -> v3:
>       * Removed redundant rcu_read_lock()/rcu_read_unlock() pairs
>       * Revised the test script based on Ido Schimmel's feedback
>       * Updated the test script to ensure compatibility with the latest
>         iperf3 version
>       * Fixed new warnings generated with 'C=2' in the previous version
>       * Other review comments addressed
> 
> v1 -> v2:
>       * list_del_rcu() is applied exclusively to legacy multipath code
>       * All occurrences of fib6_siblings have been modified to utilize RCU
>         APIs for annotation and usage.
>       * Additionally, a test script for reproducing the reported
>         issue is included
> ---
>  net/ipv6/ip6_fib.c                            |   8 +-
>  net/ipv6/route.c                              |  45 ++-
>  tools/testing/selftests/net/Makefile          |   1 +
>  .../net/ipv6_route_update_soft_lockup.sh      | 262 ++++++++++++++++++
>  4 files changed, 297 insertions(+), 19 deletions(-)
>  create mode 100755 tools/testing/selftests/net/ipv6_route_update_soft_lockup.sh
> 


Reviewed-by: David Ahern <dsah...@kernel.org>

