About the title: could you change it to "Fix: atomic_add_unless() returns true/false rather than prior value"?
And could you describe the discrepancy between the existing implementation and the kernel interface, and its effect? Basically, even though the existing implementation aimed at preventing kref overflow (and thus eventual use-after-free), the check never triggers due to this issue.

Since lttng_kref_get is used only for the metadata cache "open" and the lib_ring_buffer_open_read operations, the overall number of references ends up being limited by the number of file descriptors that can be concurrently open on the entire OS, which is bounded by the value in /proc/sys/fs/file-max. So those kref overflow protections, as used here in lttng-modules, are redundant with the bound given by the maximum number of file descriptors. It is therefore not a security issue per se, but it appears safer to have an overflow-handling kref_get nevertheless.

Thanks,

Mathieu

----- On Mar 7, 2017, at 11:36 PM, Francis Deslauriers francis.deslauri...@efficios.com wrote:

> Signed-off-by: Francis Deslauriers <francis.deslauri...@efficios.com>
> ---
>  wrapper/kref.h | 6 +-----
>  1 file changed, 1 insertion(+), 5 deletions(-)
>
> diff --git a/wrapper/kref.h b/wrapper/kref.h
> index eedefbf..f30a9ae 100644
> --- a/wrapper/kref.h
> +++ b/wrapper/kref.h
> @@ -36,11 +36,7 @@
>   */
>  static inline int lttng_kref_get(struct kref *kref)
>  {
> -	if (atomic_add_unless(&kref->refcount, 1, INT_MAX) != INT_MAX) {
> -		return 1;
> -	} else {
> -		return 0;
> -	}
> +	return atomic_add_unless(&kref->refcount, 1, INT_MAX);
>  }
>
>  #endif /* _LTTNG_WRAPPER_KREF_H */
> --
> 2.7.4

--
Mathieu Desnoyers
EfficiOS Inc.
http://www.efficios.com

_______________________________________________
lttng-dev mailing list
lttng-dev@lists.lttng.org
https://lists.lttng.org/cgi-bin/mailman/listinfo/lttng-dev