On Mon, 07 Apr 2025 06:40:44 -0700, Breno Leitao <[email protected]> wrote:
> Add a tracepoint to monitor TCP send operations, enabling detailed
> visibility into TCP message transmission.
> 
> Create a new tracepoint within the tcp_sendmsg_locked function,
> capturing traditional fields along with size_goal, which indicates the
> optimal data size for a single TCP segment. Additionally, a reference to
> the struct sock sk is passed, allowing direct access for BPF programs.
> The implementation is largely based on David's patch and suggestions.
> 
> The implementation is largely based on David's patch[1] and suggestions.

nit: duplicate sentences.


> 
> Link: https://lore.kernel.org/all/[email protected]/ [1]
> Signed-off-by: Breno Leitao <[email protected]>
> ---
>  include/trace/events/tcp.h | 24 ++++++++++++++++++++++++
>  net/ipv4/tcp.c             |  2 ++
>  2 files changed, 26 insertions(+)
> 
> diff --git a/include/trace/events/tcp.h b/include/trace/events/tcp.h
> index 1a40c41ff8c30..cab25504c4f9d 100644
> --- a/include/trace/events/tcp.h
> +++ b/include/trace/events/tcp.h
> @@ -259,6 +259,30 @@ TRACE_EVENT(tcp_retransmit_synack,
>                 __entry->saddr_v6, __entry->daddr_v6)
>  );
>  
> +TRACE_EVENT(tcp_sendmsg_locked,
> +     TP_PROTO(const struct sock *sk, const struct msghdr *msg,
> +              const struct sk_buff *skb, int size_goal),
> +
> +     TP_ARGS(sk, msg, skb, size_goal),
> +
> +     TP_STRUCT__entry(
> +             __field(const void *, skb_addr)
> +             __field(int, skb_len)
> +             __field(int, msg_left)
> +             __field(int, size_goal)
> +     ),
> +
> +     TP_fast_assign(
> +             __entry->skb_addr = skb;
> +             __entry->skb_len = skb ? skb->len : 0;
> +             __entry->msg_left = msg_data_left(msg);
> +             __entry->size_goal = size_goal;
> +     ),
> +
> +     TP_printk("skb_addr %p skb_len %d msg_left %d size_goal %d",
> +             __entry->skb_addr, __entry->skb_len, __entry->msg_left,
> +             __entry->size_goal));
> +
>  DECLARE_TRACE(tcp_cwnd_reduction_tp,
>       TP_PROTO(const struct sock *sk, int newly_acked_sacked,
>                int newly_lost, int flag),
> diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
> index ea8de00f669d0..270ce2c8c2d54 100644
> --- a/net/ipv4/tcp.c
> +++ b/net/ipv4/tcp.c
> @@ -1160,6 +1160,8 @@ int tcp_sendmsg_locked(struct sock *sk, struct msghdr *msg, size_t size)
>               if (skb)
>                       copy = size_goal - skb->len;
>  
> +             trace_tcp_sendmsg_locked(sk, msg, skb, size_goal);

skb could be NULL here (tcp_write_queue_tail() returns NULL when the
write queue is empty), so I think raw_tp_null_args[] in
kernel/bpf/btf.c needs to be updated as well.
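
Something like this, perhaps, assuming the encoding used by the
existing entries (4 bits per argument, low bit set when the argument
can be NULL; skb is argument index 2 of this tracepoint). Worth
double-checking against btf_ctx_access():

	/* kernel/bpf/btf.c */
	static const struct btf_raw_tp_null_args raw_tp_null_args[] = {
		/* ... existing entries ... */
		/* skb is NULL when the write queue is empty. */
		{ "tcp_sendmsg_locked", 0x100 },
	};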

Maybe try attaching a bpf prog that dereferences skb unconditionally
and see if the bpf verifier rejects it.
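
Untested, and the prog name is made up, but roughly:

	#include "vmlinux.h"
	#include <bpf/bpf_helpers.h>
	#include <bpf/bpf_tracing.h>

	char LICENSE[] SEC("license") = "GPL";

	SEC("tp_btf/tcp_sendmsg_locked")
	int BPF_PROG(test_skb_deref, const struct sock *sk,
		     const struct msghdr *msg, const struct sk_buff *skb,
		     int size_goal)
	{
		/* Unconditional dereference: once skb is marked
		 * PTR_MAYBE_NULL via raw_tp_null_args[], the verifier
		 * should refuse to load this until a NULL check is
		 * added.
		 */
		return skb->len;
	}

I'd expect this to load fine today, which is exactly the problem.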

See this commit for the similar issue:

commit 5da7e15fb5a12e78de974d8908f348e279922ce9
Author: Kuniyuki Iwashima <[email protected]>
Date:   Fri Jan 31 19:01:42 2025 -0800

    net: Add rx_skb of kfree_skb to raw_tp_null_args[].
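
IIRC that one marks rx_sk, argument index 3 of trace_kfree_skb(), so:

	{ "kfree_skb", 0x1000 },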


> +
>               if (copy <= 0 || !tcp_skb_can_collapse_to(skb)) {
>                       bool first_skb;
>  
> 
> -- 
> 2.47.1
