On Wednesday, 20 November 2024 18:47:17 CET Remi Pommarel wrote:
> The tt.local_changes atomic is either written with tt.changes_list_lock
> or close to it (see batadv_tt_local_event()). Thus the performance gain
> using an atomic was limited (or because of atomic_read() impact even
> negative). Using atomic also comes with the need to be wary of potential
> negative tt.local_changes value.
> 
> Simplify the tt.local_changes usage by removing the atomic property and
> modifying it only with tt.changes_list_lock held.

The overall change assumes that the compiler never splits the stores (store 
tearing) [1]. The writes are still serialized against each other by the lock, 
but a lockless reader is no longer protected from observing a partial store. 
I also haven't checked whether store fusing might be a problem here.

Kind regards,
        Sven

[1] https://www.kernel.org/doc/html/latest/core-api/wrappers/memory-barriers.html

[...]
> @@ -783,13 +783,13 @@ static int batadv_softif_init_late(struct net_device *dev)
>       atomic_set(&bat_priv->mesh_state, BATADV_MESH_INACTIVE);
>       atomic_set(&bat_priv->bcast_seqno, 1);
>       atomic_set(&bat_priv->tt.vn, 0);
> -     atomic_set(&bat_priv->tt.local_changes, 0);
>       atomic_set(&bat_priv->tt.ogm_append_cnt, 0);
>  #ifdef CONFIG_BATMAN_ADV_BLA
>       atomic_set(&bat_priv->bla.num_requests, 0);
>  #endif
>       atomic_set(&bat_priv->tp_num, 0);
>  
> +     bat_priv->tt.local_changes = 0;

Would need WRITE_ONCE (just to be consistent)

[...]
> @@ -508,21 +507,17 @@ static void batadv_tt_local_event(struct batadv_priv *bat_priv,
>  del:
>               list_del(&entry->list);
>               kmem_cache_free(batadv_tt_change_cache, entry);
> +             bat_priv->tt.local_changes--;
>               kmem_cache_free(batadv_tt_change_cache, tt_change_node);
> -             event_removed = true;
>               goto unlock;
>       }
>  
>       /* track the change in the OGMinterval list */
>       list_add_tail(&tt_change_node->list, &bat_priv->tt.changes_list);
> +     bat_priv->tt.local_changes++;

This read-modify-write needs a more complex construct: either WRITE_ONCE() 
with the addition done on a plain read (the writers are serialized by the 
lock), or __sync_add_and_fetch()/__sync_sub_and_fetch() (which atomic_inc() 
previously took care of). The latter are rarely used in the kernel, so I 
wouldn't want to introduce them in the batman-adv module.

> @@ -1022,7 +1017,7 @@ static void batadv_tt_tvlv_container_update(struct batadv_priv *bat_priv)
>       tt_data->flags = BATADV_TT_OGM_DIFF;
>  
>       spin_lock_bh(&bat_priv->tt.changes_list_lock);
> -     atomic_set(&bat_priv->tt.local_changes, 0);
> +     bat_priv->tt.local_changes = 0;
>  
>       list_for_each_entry_safe(entry, safe, &bat_priv->tt.changes_list,
>                                list) {

Would need WRITE_ONCE

> @@ -1438,7 +1433,7 @@ static void batadv_tt_changes_list_free(struct batadv_priv *bat_priv)
>               kmem_cache_free(batadv_tt_change_cache, entry);
>       }
>  
> -     atomic_set(&bat_priv->tt.local_changes, 0);
> +     bat_priv->tt.local_changes = 0;
>       spin_unlock_bh(&bat_priv->tt.changes_list_lock);
>  }

Would need WRITE_ONCE 

