[Note: My former email address is going away eventually.  I am moving the
conversation to my other email address, which is a bit more permanent.]

On Mon, 2017-09-04 at 15:27 +0100, Radu Nicolau wrote:
> 
> On 8/7/2017 5:11 PM, Charles (Chas) Williams wrote:
> > After commit 8f094a9ac5d7 ("mbuf: set mbuf fields while in pool") it is
> > much harder to detect a "double free".  If the developer makes a copy
> > of an mbuf pointer and frees it twice, this condition is never detected
> > and the mbuf gets returned to the pool twice.
> >
> > Since this requires extra work to track, make this behavior conditional
> > on CONFIG_RTE_LIBRTE_MBUF_DEBUG.
> >
> > Signed-off-by: Chas Williams <ciwil...@brocade.com>
> > ---
> >
> > @@ -1304,10 +1329,13 @@ rte_pktmbuf_prefree_seg(struct rte_mbuf *m)
> >                     m->next = NULL;
> >                     m->nb_segs = 1;
> >             }
> > +#ifdef RTE_LIBRTE_MBUF_DEBUG
> > +           rte_mbuf_refcnt_set(m, RTE_MBUF_UNUSED_CNT);
> > +#endif
> >   
> >             return m;
> >   
> > -       } else if (rte_atomic16_add_return(&m->refcnt_atomic, -1) == 0) {
> > +   } else if (rte_mbuf_refcnt_update(m, -1) == 0) {
> Why replace the use of atomic operation?

It doesn't.  rte_mbuf_refcnt_update() is also atomic(ish), just slightly more
optimal.  This whole section is a little hazy, actually.  It looks like
rte_pktmbuf_prefree_seg() open-codes (unwraps) rte_mbuf_refcnt_update() so it
can avoid setting the refcnt when it already holds the 'correct' value.
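
For context, the helper behaves roughly like the sketch below (a paraphrase
from memory, not the verbatim DPDK source; the function name is mine):

    #include <rte_branch_prediction.h>
    #include <rte_mbuf.h>

    /* Sketch of what rte_mbuf_refcnt_update() boils down to: the atomic
     * add is only paid when the mbuf might actually be shared. */
    static inline uint16_t
    refcnt_update_sketch(struct rte_mbuf *m, int16_t value)
    {
            /* Sole owner: no concurrent access is possible, so a plain
             * store is enough -- the "atomic(ish)" part. */
            if (likely(rte_mbuf_refcnt_read(m) == 1)) {
                    rte_mbuf_refcnt_set(m, (uint16_t)(1 + value));
                    return (uint16_t)(1 + value);
            }

            /* Possibly shared: fall back to a real atomic add. */
            return (uint16_t)(rte_atomic16_add_return(&m->refcnt_atomic,
                                                      value));
    }

rte_pktmbuf_prefree_seg() open-codes the refcnt == 1 branch and skips the
store entirely on free, since a pooled mbuf is expected to sit at refcnt == 1
anyway -- which is also exactly why a stray second free of the same pointer
slips through unnoticed without the debug sentinel.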

> >   
> >   
> >             if (RTE_MBUF_INDIRECT(m))
> > @@ -1317,7 +1345,7 @@ rte_pktmbuf_prefree_seg(struct rte_mbuf *m)
> >                     m->next = NULL;
> >                     m->nb_segs = 1;
> >             }
> > -           rte_mbuf_refcnt_set(m, 1);
> > +           rte_mbuf_refcnt_set(m, RTE_MBUF_UNUSED_CNT);
> >   
> >             return m;
> >     }
> Reviewed-by:  Radu Nicolau <radu.nico...@intel.com>

Thanks for the review.
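
For anyone following along in the archive, the failure mode the commit
message describes boils down to something like this (purely hypothetical
caller code, the names are made up):

    #include <rte_mbuf.h>
    #include <rte_mempool.h>

    /* Hypothetical example of the bug: two aliases of the same mbuf
     * pointer both get freed. */
    static void
    double_free_example(struct rte_mempool *pool)
    {
            struct rte_mbuf *m = rte_pktmbuf_alloc(pool);
            struct rte_mbuf *alias = m;   /* copied pointer, not a clone */

            if (m == NULL)
                    return;

            rte_pktmbuf_free(m);          /* legitimate free */

            /* Second free of the same buffer: the first free left
             * refcnt == 1, so rte_pktmbuf_prefree_seg() happily hands the
             * mbuf back to the pool a second time and nothing complains.
             * With the refcnt parked at RTE_MBUF_UNUSED_CNT under
             * RTE_LIBRTE_MBUF_DEBUG, this second free can be caught. */
            rte_pktmbuf_free(alias);
    }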
