Re: [PATCH net v2] rds: Fix incorrect statistics counting
From: Håkon Bugge
Date: Wed, 6 Sep 2017 18:35:51 +0200

> In rds_send_xmit() there is logic to batch the sends. However, if
> another thread has acquired the lock and has incremented the send_gen,
> it is considered a race and we yield. The code incrementing the
> s_send_lock_queue_raced statistics counter did not count this event
> correctly.
>
> This commit counts the race condition correctly.
>
> Changes from v1:
>   - Removed check for *someone_on_xmit()*
>   - Fixed incorrect indentation
>
> Signed-off-by: Håkon Bugge
> Reviewed-by: Knut Omang

Applied.
Re: [PATCH net v2] rds: Fix incorrect statistics counting
On 9/6/2017 9:35 AM, Håkon Bugge wrote:
> In rds_send_xmit() there is logic to batch the sends. However, if
> another thread has acquired the lock and has incremented the send_gen,
> it is considered a race and we yield. The code incrementing the
> s_send_lock_queue_raced statistics counter did not count this event
> correctly.
>
> This commit counts the race condition correctly.
>
> Changes from v1:
>   - Removed check for *someone_on_xmit()*
>   - Fixed incorrect indentation
>
> Signed-off-by: Håkon Bugge
> Reviewed-by: Knut Omang
> ---
Thanks for the update.

Acked-by: Santosh Shilimkar
[PATCH net v2] rds: Fix incorrect statistics counting
In rds_send_xmit() there is logic to batch the sends. However, if
another thread has acquired the lock and has incremented the send_gen,
it is considered a race and we yield. The code incrementing the
s_send_lock_queue_raced statistics counter did not count this event
correctly.

This commit counts the race condition correctly.

Changes from v1:
  - Removed check for *someone_on_xmit()*
  - Fixed incorrect indentation

Signed-off-by: Håkon Bugge
Reviewed-by: Knut Omang
---
 net/rds/send.c | 10 +++++++---
 1 file changed, 7 insertions(+), 3 deletions(-)

diff --git a/net/rds/send.c b/net/rds/send.c
index 058a407..b52cdc8 100644
--- a/net/rds/send.c
+++ b/net/rds/send.c
@@ -428,14 +428,18 @@ int rds_send_xmit(struct rds_conn_path *cp)
 	 * some work and we will skip our goto
 	 */
 	if (ret == 0) {
+		bool raced;
+
 		smp_mb();
+		raced = send_gen != READ_ONCE(cp->cp_send_gen);
+
 		if ((test_bit(0, &conn->c_map_queued) ||
-		     !list_empty(&cp->cp_send_queue)) &&
-		    send_gen == READ_ONCE(cp->cp_send_gen)) {
-			rds_stats_inc(s_send_lock_queue_raced);
+		    !list_empty(&cp->cp_send_queue)) && !raced) {
 			if (batch_count < send_batch_count)
 				goto restart;
 			queue_delayed_work(rds_wq, &cp->cp_send_w, 1);
+		} else if (raced) {
+			rds_stats_inc(s_send_lock_queue_raced);
 		}
 	}
 out:
--
2.9.3
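For readers who want to see the counting change in isolation, here is a
minimal userspace sketch, not the kernel code itself: queue_pending stands
in for the test_bit()/list_empty() checks, the generation values are passed
in explicitly, and the memory barrier is omitted. It only illustrates which
branch bumps the counter before and after the patch.

/*
 * Userspace model of the s_send_lock_queue_raced counting in
 * rds_send_xmit(), before and after the fix.  Not kernel code.
 */
#include <stdbool.h>
#include <stdio.h>

static unsigned long raced_count_old;
static unsigned long raced_count_new;

/* Pre-patch logic: the counter is bumped only when the generations still
 * match, i.e. precisely when there was *no* race. */
static void count_old(bool queue_pending, unsigned int send_gen,
		      unsigned int cp_send_gen)
{
	if (queue_pending && send_gen == cp_send_gen)
		raced_count_old++;
}

/* Post-patch logic: the counter is bumped exactly when another thread has
 * advanced cp_send_gen, i.e. when the race actually happened. */
static void count_new(bool queue_pending, unsigned int send_gen,
		      unsigned int cp_send_gen)
{
	bool raced = send_gen != cp_send_gen;

	if (queue_pending && !raced) {
		/* the kernel would restart the batch or requeue the work */
	} else if (raced) {
		raced_count_new++;
	}
}

int main(void)
{
	/* Case 1: queue still pending, nobody raced us. */
	count_old(true, 1, 1);
	count_new(true, 1, 1);

	/* Case 2: another thread incremented cp_send_gen (a real race). */
	count_old(true, 1, 2);
	count_new(true, 1, 2);

	printf("old logic counted %lu race(s) (bumped on the no-race case)\n",
	       raced_count_old);
	printf("new logic counted %lu race(s) (bumped on the actual race)\n",
	       raced_count_new);
	return 0;
}

Compiled with any C compiler (e.g. gcc), this reports one increment for the
old logic on the no-race case and one for the new logic on the raced case,
which is the behavioural difference the patch description refers to.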