Peter Dimov wrote:
> The overhead is usually acceptable, even with a plain pthread_mutex.
Agreed. With that in mind, why wasn't the shared_ptr *also* protected by a mutex?

>> Thereby making the simultaneous read/write safe (safe in that no
>> memory would be leaked b/c the pointers written/read).
>
> Consider this:
>
> // thread A
> p->f();
>
> // thread B
> p.reset();

As you pointed out, there's no way for that to be thread safe with shared_ptr, but it's probably not possible to make it safe with *any* type of pointer, so that's not what I was trying to ask.

You *could* make all reasonable operations on the pointer thread safe with a mutex (where "reasonable" = all the examples shown in the boost::shared_ptr documentation). Why wasn't that done? I.e., if the reference count of a shared pointer is safe under simultaneous writes, why not make the pointer value safe as well?

From the boost docs:

,----------------
| boost::shared_ptr<int> p, p3;
|
| // thread A
| p = p3;      // reads p3, writes p
|
| // thread B
| p3.reset();  // writes p3, simultaneous read/write undefined
`----------------

You've already got the mutex "penalty" in the counter. Why not pull it out of there, put it in the shared pointer, and then you've got thread-safe pointers, right? I understand that there's no guarantee on what p will end up pointing to, but we know it's not going to point to garbage - it *will* be the old p3 value or 0.

>> Which brings me to a third question.
>> Let's say I've got a class (e.g. shared_count) that I sometimes want
>> to be protected by a mutex, and sometimes not. Say, for instance, I
>> want it to be protected by default, but sometimes I want to avoid that
>> overhead b/c I know I'll be using it in an environment that's already
>> protected by a mutex (and can avoid the extra mutex overhead).
>
> Can you demonstrate this with an example?

I may have this a little wrong, but here's my shot at an example:

today
-----
template <typename T> class shared_ptr;
(which uses)
class counted_base;

tomorrow
--------
template <class LockingStrategy = MutexLockingStrategy> class counted_base;
template <typename T, class LockingStrategy = MutexLockingStrategy> class shared_ptr;
(which specializes)
counted_base<NullLockingStrategy>

(where MutexLockingStrategy and NullLockingStrategy are defined in the previous email)

So, you use a template parameter on the counted_base class to select the kind of thread-locking primitives you want to use. Today's counted_base == tomorrow's counted_base<MutexLockingStrategy> in terms of functionality, but the extra flexibility lets a shared_ptr avoid the mutex when it doesn't need one.

I started thinking about this with my own work. I've got a query-tree class (a 2-D binary tree) that sometimes needs to be thread-safe and sometimes not - in the same executable. E.g., I want to be able to do:

QT<NullLockingStrategy> unsafe_tree;
QT<MutexLockingStrategy> thread_safe_tree;

and this applies to all sorts of containers/classes.

>> Just wondering if the above strategy could be used in the case of
>> shared_count, and if not, why not?
>
> Because of the extra LockingStrategy parameter. :-)

Yeah, but is there a technical reason for not doing something like the above? I'm wondering what the downside would be. The upsides I can see are:

1) pulls the mutex details outside of the class
2) gets rid of the ugly '#ifdef BOOST_HAS_THREADS' lines
3) allows different specializations of the (counted_base) class in the same binary

Obviously, the MutexLockingStrategy class would still have the ugly #ifdef lines in it, but that's it.
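To make the idea concrete, here's a rough sketch of the two policy classes and a counted_base parameterized on them. The member names (lock/unlock, add_ref/release) are just made up for illustration - I'm not claiming this matches the real counted_base interface, only that today's mutex-protected behaviour becomes one instantiation of the template:

#include <pthread.h>

// Illustrative policies, not the real boost internals.
// MutexLockingStrategy wraps a pthread_mutex; NullLockingStrategy
// compiles away to nothing.
class MutexLockingStrategy
{
public:
    MutexLockingStrategy()  { pthread_mutex_init(&mtx_, 0); }
    ~MutexLockingStrategy() { pthread_mutex_destroy(&mtx_); }
    void lock()   { pthread_mutex_lock(&mtx_); }
    void unlock() { pthread_mutex_unlock(&mtx_); }
private:
    MutexLockingStrategy(const MutexLockingStrategy&);            // noncopyable
    MutexLockingStrategy& operator=(const MutexLockingStrategy&);
    pthread_mutex_t mtx_;
};

class NullLockingStrategy
{
public:
    void lock()   {}   // no-ops: the caller already serializes access
    void unlock() {}
};

// Sketch of a reference-count base parameterized on the locking policy.
template <class LockingStrategy = MutexLockingStrategy>
class counted_base
{
public:
    counted_base() : use_count_(1) {}

    void add_ref()
    {
        lock_.lock();
        ++use_count_;
        lock_.unlock();
    }

    long release()          // returns the new count; owner deletes at zero
    {
        lock_.lock();
        long n = --use_count_;
        lock_.unlock();
        return n;
    }

private:
    long use_count_;
    LockingStrategy lock_;
};

Usage would look like:

counted_base<> c1;                        // today's behaviour, mutex-protected
counted_base<NullLockingStrategy> c2;     // no locking overhead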
TJ
--
Trey Jackson
[EMAIL PROTECTED]
"Instant gratification takes too long." -- Carrie Fisher