RE: [PATCH 2/4] mac80211/cfg: mesh: fix healing time when a mesh peer is disconnecting

2016-06-28 Thread Machani, Yaniv
On Tue, Jun 28, 2016 at 15:27:47, Bob Copeland wrote:
> Cc: linux-wirel...@vger.kernel.org; net...@vger.kernel.org; Hahn, Maital
> Subject: Re: [PATCH 2/4] mac80211/cfg: mesh: fix healing time when a 
> mesh peer is disconnecting
> 
> On Tue, Jun 28, 2016 at 02:13:05PM +0300, Yaniv Machani wrote:
> > From: Maital Hahn <mait...@ti.com>
> >
> > Once receiving a CLOSE action frame from the disconnecting peer, 
> > flush all entries in the path table which has this peer as the next hop.
> 
> Please address the user-visible behavior in your commit messages.
> Does it crash?  Does it send frames to an invalid peer?  Do frames get 
> dropped?
> 

Hi Bob,
It was a crash; apparently it was already fixed by your patches some time ago.
I'll drop that part and resend the second patch (with more 'why' and fewer
typos).

> > In addition, upon receiving a packet, if next hop is not found, 
> > trigger PERQ immidiatly, instead of just putting it in the queue.
> 
> "PREQ"
> 
> Please split this into a separate patch that we can review separately 
> (and also give the "why" in the commit log).
> 
> > @@ -1011,6 +1011,7 @@ static void sta_apply_mesh_params(struct ieee80211_local *local,
> >  		if (sta->mesh->plink_state == NL80211_PLINK_ESTAB)
> >  			changed = mesh_plink_dec_estab_count(sdata);
> >  		sta->mesh->plink_state = params->plink_state;
> > +		mesh_path_flush_by_nexthop(sta);
> 
> This isn't necessary, the caller should already be doing
> mesh_path_flush_by_nexthop() in every case I could see.  Besides, it
> cannot be done under the plink lock.
> 

I believe this was fixed by your patch "mac80211: mesh: flush paths outside of
plink lock", so there is probably no need for it on the latest tree either.
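
For reference, the pattern in that patch is roughly the following (a
simplified sketch from memory, not the actual commit):

	spin_lock_bh(&sta->mesh->plink_lock);
	sta->mesh->plink_state = params->plink_state;
	/* ... other plink state updates, all under the lock ... */
	spin_unlock_bh(&sta->mesh->plink_lock);

	/* the flush walks the path table and takes its own locks, so
	 * it is only safe here, after plink_lock has been dropped */
	mesh_path_flush_by_nexthop(sta);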

Thanks,
Yaniv
 




Re: [PATCH 2/4] mac80211/cfg: mesh: fix healing time when a mesh peer is disconnecting

2016-06-28 Thread Bob Copeland
On Tue, Jun 28, 2016 at 02:13:05PM +0300, Yaniv Machani wrote:
> From: Maital Hahn 
> 
> Once receiving a CLOSE action frame from the disconnecting peer,
> flush all entries in the path table which has this peer as the
> next hop.

Please address the user-visible behavior in your commit messages.
Does it crash?  Does it send frames to an invalid peer?  Do
frames get dropped?
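
To make the 'why' concrete: as I read it, the intent is something like the
sketch below (a hypothetical helper, not code from the patch):

	/* When the peer sends a CLOSE action frame, tear the peer link
	 * down and immediately drop every cached path that forwards
	 * through this peer, rather than letting mesh housekeeping
	 * expire the stale entries later. */
	static void mesh_peer_close_rx(struct sta_info *sta)
	{
		mesh_plink_deactivate(sta);		/* peer link down */
		mesh_path_flush_by_nexthop(sta);	/* drop stale paths */
	}

If that is the intent, spelling it out in the log (along with what a user
sees without it) would help.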

> In addition, upon receiving a packet, if next hop is not found,
> trigger PERQ immidiatly, instead of just putting it in the queue.

"PREQ"

Please split this into a separate patch that we can review
separately (and also give the "why" in the commit log).
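
For the record, my reading of that second part is roughly the following (a
rough sketch only, not the posted diff; the helper names are from the
mac80211 mesh code, and target_addr stands for the frame's mesh
destination):

	/* next-hop lookup failed: queue the frame as before, but kick
	 * off path discovery at once instead of leaving it for the
	 * next PREQ service run */
	mpath = mesh_path_lookup(sdata, target_addr);
	if (!mpath)
		mpath = mesh_path_add(sdata, target_addr);
	if (IS_ERR(mpath))
		return PTR_ERR(mpath);

	skb_queue_tail(&mpath->frame_queue, skb);
	mesh_queue_preq(mpath, PREQ_Q_F_START | PREQ_Q_F_REFRESH);

That kind of summary is what belongs in the commit log of the split-out
patch.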

> Signed-off-by: Maital Hahn 
> Acked-by: Yaniv Machani 
> ---
>  net/mac80211/cfg.c       |  1 +
>  net/mac80211/mesh.c      |  3 ++-
>  net/mac80211/mesh_hwmp.c | 42 +++++++++++++++++++++++++-----------------
>  3 files changed, 28 insertions(+), 18 deletions(-)
> 
> diff --git a/net/mac80211/cfg.c b/net/mac80211/cfg.c
> index 0c12e40..f876ef7 100644
> --- a/net/mac80211/cfg.c
> +++ b/net/mac80211/cfg.c
> @@ -1011,6 +1011,7 @@ static void sta_apply_mesh_params(struct ieee80211_local *local,
>  		if (sta->mesh->plink_state == NL80211_PLINK_ESTAB)
>  			changed = mesh_plink_dec_estab_count(sdata);
>  		sta->mesh->plink_state = params->plink_state;
> +		mesh_path_flush_by_nexthop(sta);

This isn't necessary, the caller should already be doing
mesh_path_flush_by_nexthop() in every case I could see.  Besides, it
cannot be done under the plink lock.

> +++ b/net/mac80211/mesh.c
> @@ -159,7 +159,8 @@ void mesh_sta_cleanup(struct sta_info *sta)
>  	if (!sdata->u.mesh.user_mpm) {
>  		changed |= mesh_plink_deactivate(sta);
>  		del_timer_sync(&sta->mesh->plink_timer);
> -	}
> +	} else
> +		mesh_path_flush_by_nexthop(sta);

And this is already fixed in mac80211-next.

-- 
Bob Copeland %% http://bobcopeland.com/