From: Sven Eckelmann <s...@narfation.org>

It is hard to understand why the refcount is increased when this is not
done near the place where the new reference is actually used. Calling
kref_get right before the code which requires the reference, and in the
same function, helps to avoid accidental problems caused by incorrect
reference counting.

Signed-off-by: Sven Eckelmann <s...@narfation.org>
Signed-off-by: Marek Lindner <mareklind...@neomailbox.ch>
Signed-off-by: Simon Wunderlich <s...@simonwunderlich.de>
---
 net/batman-adv/bridge_loop_avoidance.c | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/net/batman-adv/bridge_loop_avoidance.c b/net/batman-adv/bridge_loop_avoidance.c
index b0517a0..1db3c12 100644
--- a/net/batman-adv/bridge_loop_avoidance.c
+++ b/net/batman-adv/bridge_loop_avoidance.c
@@ -526,11 +526,9 @@ batadv_bla_get_backbone_gw(struct batadv_priv *bat_priv, u8 *orig,
        atomic_set(&entry->wait_periods, 0);
        ether_addr_copy(entry->orig, orig);
        INIT_WORK(&entry->report_work, batadv_bla_loopdetect_report);
-
-       /* one for the hash, one for returning */
        kref_init(&entry->refcount);
-       kref_get(&entry->refcount);
 
+       kref_get(&entry->refcount);
        hash_added = batadv_hash_add(bat_priv->bla.backbone_hash,
                                     batadv_compare_backbone_gw,
                                     batadv_choose_backbone_gw, entry,
-- 
2.9.3
