Hi Lorenzo,

On Wed, Nov 12, 2025 at 05:02:37PM +0100, Lorenzo Bianconi wrote:
[...]
> > On Fri, Nov 07, 2025 at 12:14:47PM +0100, Lorenzo Bianconi wrote:
> > [...]
> > > @@ -565,8 +622,9 @@ nf_flow_offload_ip_hook(void *priv, struct sk_buff *skb,
> > >  
> > >   dir = tuplehash->tuple.dir;
> > >   flow = container_of(tuplehash, struct flow_offload, tuplehash[dir]);
> > > + other_tuple = &flow->tuplehash[!dir].tuple;
> > >  
> > > - if (nf_flow_encap_push(skb, &flow->tuplehash[!dir].tuple) < 0)
> > > + if (nf_flow_encap_push(state->net, skb, other_tuple))
> > >           return NF_DROP;
> > >  
> > >   switch (tuplehash->tuple.xmit_type) {
> > > @@ -577,7 +635,9 @@ nf_flow_offload_ip_hook(void *priv, struct sk_buff *skb,
> > >                   flow_offload_teardown(flow);
> > >                   return NF_DROP;
> > >           }
> > > -         neigh = ip_neigh_gw4(rt->dst.dev, rt_nexthop(rt, flow->tuplehash[!dir].tuple.src_v4.s_addr));
> > > +         dest = other_tuple->tun_num ? other_tuple->tun.src_v4.s_addr
> > > +                                     : other_tuple->src_v4.s_addr;
> > 
> > I think this can be simplified if my series uses ip_hdr(skb)->daddr
> > for rt_nexthop(), see the attached patch. The address would be fetched
> > _before_ pushing the tunnel and layer 2 encapsulation headers. Then,
> > there is no need to fetch other_tuple and check whether tun_num is
> > greater than zero.
> > 
> > See my sketch patch below; I am going to give this a try. If this is
> > correct, I would need one more iteration from you.
> >
> > diff --git a/net/netfilter/nf_flow_table_ip.c b/net/netfilter/nf_flow_table_ip.c
> > index 8b74fb34998e..ff2b6c16c715 100644
> > --- a/net/netfilter/nf_flow_table_ip.c
> > +++ b/net/netfilter/nf_flow_table_ip.c
> > @@ -427,6 +427,7 @@ nf_flow_offload_ip_hook(void *priv, struct sk_buff *skb,
> >     struct flow_offload *flow;
> >     struct neighbour *neigh;
> >     struct rtable *rt;
> > +   __be32 ip_dst;
> >     int ret;
> >  
> >     tuplehash = nf_flow_offload_lookup(&ctx, flow_table, skb);
> > @@ -449,6 +450,7 @@ nf_flow_offload_ip_hook(void *priv, struct sk_buff *skb,
> >  
> >     dir = tuplehash->tuple.dir;
> >     flow = container_of(tuplehash, struct flow_offload, tuplehash[dir]);
> > +   ip_dst = ip_hdr(skb)->daddr;
> 
> I agree this patch will simplify my series (thx :)), but I guess we should
> move the ip_dst initialization to after nf_flow_encap_push(), since we need
> to route the traffic according to the tunnel dst IP address, right?

Right, I made a quick edit; it looks like this:

@@ -566,9 +624,14 @@ nf_flow_offload_ip_hook(void *priv, struct sk_buff *skb,
 
        dir = tuplehash->tuple.dir;
        flow = container_of(tuplehash, struct flow_offload, tuplehash[dir]);
+       other_tuple = &flow->tuplehash[!dir].tuple;
+
+       if (nf_flow_tunnel_push(skb, other_tuple) < 0)
+               return NF_DROP;
+
        ip_daddr = ip_hdr(skb)->daddr;
 
-       if (nf_flow_encap_push(skb, &flow->tuplehash[!dir].tuple) < 0)
+       if (nf_flow_encap_push(skb, other_tuple) < 0)
                return NF_DROP;
 
        switch (tuplehash->tuple.xmit_type) {

That is, fetch the destination address after the tunnel header push but
before pushing the l2 encap (which could modify the skb_network_header()
pointer).
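
With that, the XMIT_NEIGH case can drop the tun_num check entirely. A
rough sketch of the resulting neighbour lookup (untested, based on the
hunk from your series quoted above):

        /* ip_daddr was read from ip_hdr(skb)->daddr after the tunnel
         * push, so it already carries the outer destination address
         * whenever a tunnel header is in place.
         */
        neigh = ip_neigh_gw4(rt->dst.dev, rt_nexthop(rt, ip_daddr));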

I made a few more cosmetic edits to your series and pushed them out to
this branch:

https://git.kernel.org/pub/scm/linux/kernel/git/netfilter/nf-next.git/log/?h=flowtable-consolidate-xmit%2bipip

I just noticed, in nf_flow_tunnel_ipip_push(), that this can be removed:

        memset(IPCB(skb), 0, sizeof(*IPCB(skb)));

because this packet never entered the IP layer; the flowtable takes it
before it can get there.
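
For reference, a rough sketch of how I would expect the helper to look
without that line (illustrative only, not tested against your series;
tun.dst_v4 and the remaining header setup are assumptions on my side):

        static int nf_flow_tunnel_ipip_push(struct sk_buff *skb,
                                            struct flow_offload_tuple *tuple)
        {
                struct iphdr *iph;
                int err;

                err = skb_cow_head(skb, sizeof(*iph));
                if (err)
                        return err;

                skb_push(skb, sizeof(*iph));
                skb_reset_network_header(skb);

                /* No memset(IPCB(skb), 0, ...) here: the flowtable takes
                 * the packet from the ingress path before it reaches the
                 * IP layer, so the control block was never initialized.
                 */
                iph = ip_hdr(skb);
                iph->version  = 4;
                iph->ihl      = sizeof(*iph) >> 2;
                iph->tos      = 0;
                iph->id       = 0;
                iph->frag_off = htons(IP_DF);
                iph->ttl      = 64;     /* illustrative */
                iph->protocol = IPPROTO_IPIP;
                iph->daddr    = tuple->tun.src_v4.s_addr;
                iph->saddr    = tuple->tun.dst_v4.s_addr; /* assumed field */
                iph->tot_len  = htons(skb->len);
                ip_send_check(iph);

                return 0;
        }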
