On 31/03/2026 at 13:07, Andrea Mayer wrote:
> Add a selftest that verifies the dst_cache in seg6 lwtunnel is not
> shared between the input (forwarding) and output (locally generated)
> paths.
> 
> The test creates three namespaces (ns_src, ns_router, ns_dst)
> connected in a line. An SRv6 encap route on ns_router encapsulates
> traffic destined to cafe::1 with SID fc00::100. The SID is
> reachable only for forwarded traffic (from ns_src) via an ip rule
> matching the ingress interface (iif veth-r0 lookup 100), and
> blackholed in the main table.
> 
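For readers following along, the routing setup described above could be sketched roughly as below. This is only my reading of the description, not the patch itself; interface names other than veth-r0, and the next hop, are placeholders:

```shell
# SRv6 encap route on ns_router: traffic to cafe::1 gets SID fc00::100
ip -n ns_router -6 route add cafe::1/128 encap seg6 mode encap \
        segs fc00::100 dev veth-r1

# Forwarded traffic arriving on veth-r0 (from ns_src) resolves the SID
# through table 100...
ip -n ns_router -6 rule add iif veth-r0 lookup 100
ip -n ns_router -6 route add fc00::100/128 via <next-hop> table 100

# ...while locally generated traffic falls through to the main table,
# where the SID is blackholed
ip -n ns_router -6 route add blackhole fc00::100/128
```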
> The test verifies that:
> 
>   1. A packet generated locally on ns_router does not reach
>      ns_dst with an empty cache, since the SID is blackholed;
>   2. A forwarded packet from ns_src populates the input cache
>      from table 100 and reaches ns_dst;
>   3. A packet generated locally on ns_router still does not
>      reach ns_dst after the input cache is populated,
>      confirming the output path does not reuse the input
>      cache entry.
> 
> Both the forwarded and local packets are pinned to the same CPU
> with taskset, since dst_cache is per-cpu.
> 
> Cc: Shuah Khan <[email protected]>
> Cc: [email protected]
> Signed-off-by: Andrea Mayer <[email protected]>
> ---

[snip]

> +test_cache_isolation()
> +{
> +     RET=0
> +
> +     # local ping with empty cache: must fail (SID is blackholed)
> +     if ip netns exec "${NS_RTR}" taskset -c 0 \
> +                     ping6 -c 1 -W 2 "${DEST}" &>/dev/null; then
> +             echo "SKIP: local ping succeeded with empty cache"
Nit: maybe use the same message as in the forwarding case:
"SKIP: local ping succeeded, topology broken"

> +             exit "${ksft_skip}"
> +     fi
> +
> +     # forward from ns_src to populate the input cache
> +     if ! ip netns exec "${NS_SRC}" taskset -c 0 \
> +                     ping6 -c 1 -W 2 "${DEST}" &>/dev/null; then
> +             echo "SKIP: forwarded ping failed, topology broken"
> +             exit "${ksft_skip}"
> +     fi
> +
[snip]

With that nit addressed:
Reviewed-by: Nicolas Dichtel <[email protected]>
