On 02/12/2019 03:31 AM, Yafang Shao wrote:
> Introduce this new op BPF_SOCK_OPS_STATS_CB for tcp_stats() so that it
> can be traced via BPF on a per-socket basis.
> BPF_SOCK_OPS_STATS_CB takes one argument, the Linux MIB index
> LINUX_MIB_*, to indicate the TCP event.
> All these Linux MIBs are defined in include/uapi/linux/snmp.h.
> 
> Signed-off-by: Yafang Shao <laoar.s...@gmail.com>
> ---
>  include/uapi/linux/bpf.h | 5 +++++
>  net/ipv4/tcp_input.c     | 1 +
>  2 files changed, 6 insertions(+)
> 
> diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
> index 1777fa0..0314ddd 100644
> --- a/include/uapi/linux/bpf.h
> +++ b/include/uapi/linux/bpf.h
> @@ -2894,6 +2894,11 @@ enum {
>       BPF_SOCK_OPS_TCP_LISTEN_CB,     /* Called on listen(2), right after
>                                        * socket transition to LISTEN state.
>                                        */
> +     BPF_SOCK_OPS_STATS_CB,          /*
> +                                      * Called on tcp_stats().
> +                                      * Arg1: Linux MIB index
> +                                      *       LINUX_MIB_*
> +                                      */
>  };
>  
>  /* List of TCP states. There is a build check in net/ipv4/tcp.c to detect
> diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
> index 88deb1f..4acf458 100644
> --- a/net/ipv4/tcp_input.c
> +++ b/net/ipv4/tcp_input.c
> @@ -3557,6 +3557,7 @@ static u32 tcp_newly_delivered(struct sock *sk, u32 prior_delivered, int flag)
>  static void tcp_stats(struct sock *sk, int mib_idx)
>  {
>       NET_INC_STATS(sock_net(sk), mib_idx);
> +     tcp_call_bpf(sk, BPF_SOCK_OPS_STATS_CB, 1, &mib_idx);
>  }
>  
>  /* This routine deals with incoming acks, but not outgoing ones. */
> 
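
For context, under the proposed patch a sockops program would be invoked with
op == BPF_SOCK_OPS_STATS_CB and the LINUX_MIB_* index in args[0]. Below is a
minimal sketch of such a consumer, assuming the patch is applied and using
libbpf conventions; the map name, its size, and the program name are
illustrative only, not part of the patch.

	#include <linux/bpf.h>
	#include <bpf/bpf_helpers.h>

	/* Illustrative per-MIB counter map; 128 entries is an arbitrary
	 * bound meant to cover the LINUX_MIB_* range in
	 * include/uapi/linux/snmp.h.
	 */
	struct {
		__uint(type, BPF_MAP_TYPE_ARRAY);
		__uint(max_entries, 128);
		__type(key, __u32);
		__type(value, __u64);
	} mib_counters SEC(".maps");

	SEC("sockops")
	int count_tcp_stats(struct bpf_sock_ops *skops)
	{
		__u32 mib;
		__u64 *val;

		if (skops->op != BPF_SOCK_OPS_STATS_CB)
			return 1;

		/* Arg1 (args[0]) carries the LINUX_MIB_* index passed
		 * from tcp_stats() via tcp_call_bpf().
		 */
		mib = skops->args[0];
		val = bpf_map_lookup_elem(&mib_counters, &mib);
		if (val)
			__sync_fetch_and_add(val, 1);

		return 1;
	}

	char _license[] SEC("license") = "GPL";

User space could then read mib_counters to get per-MIB event counts for the
sockets the program is attached to, instead of polling /proc/net/netstat.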

If the plan is to add an additional tcp_call_bpf() to all NET_INC_STATS()
calls in TCP, I will say no.

So far, tcp_call_bpf() has not been used in the fast path.
