Sorry for the top-post; Outlook isn't being very kind with its quoting. For the mutexes, it seems that choice #1 has already been made for a good amount of the stack.
That said, rather than writing our own, I'd prefer to propose #6: write/expose a mutex library with a standard ABI and implement it via std::mutex. This seems like the least amount of work for cross-platform support, and it will compile just about anywhere.

-----Original Message-----
From: iotivity-dev-bounces at lists.iotivity.org [mailto:[email protected]] On Behalf Of Thiago Macieira
Sent: Tuesday, February 24, 2015 10:32 AM
To: iotivity-dev at lists.iotivity.org
Subject: Re: [dev] Thread-safety solution for the ITVTContext

On Tuesday 24 February 2015 09:49:59 Keane, Erich wrote:
> On Tue, 2015-02-24 at 09:34 -0800, Thiago Macieira wrote:
> > 1) the context is reference-counted, so we will provide functions like:
> >
> >  * ITVTContext_context_create() (ref counts up)
> >  * ITVTContext_context_unref()
> >  * ITVTContext_context_ref(ITVTContext *)
>
> Is the ref-counting overkill here? I would suspect that just allowing
> the consumer to control the lifetime would be a simpler implementation
> that gets us just about everything.

I don't think so. I think it's required because of the queue, especially if we decide to provide a "default ITVTContext". I did not include that in the description, but it was part of my design. If that's part of our goals, then two quite independent users of the context cannot know the lifetime of the context.

However, if a default, global context is not part of our goals, then I agree we can leave the lifetime management to the user. In that case, when the lifetime of the context ends, so does the queue, and we won't need a refcount.
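[As a concrete illustration of the ref-counting API being discussed, here is a minimal sketch in C11. The lowercase names (itvt_context_*) and the use of <stdatomic.h> are my assumptions for illustration, not the actual IoTivity API:]

```c
/* Hypothetical sketch of a ref-counted context. Names are illustrative,
   not the real API; assumes C11 atomics are available. */
#include <stdatomic.h>
#include <stdlib.h>

typedef struct itvt_context {
    atomic_int refcount;
    /* ... queue, sockets, callback table ... */
} itvt_context;

itvt_context *itvt_context_create(void)
{
    itvt_context *ctx = calloc(1, sizeof *ctx);
    if (ctx)
        atomic_init(&ctx->refcount, 1);  /* creator holds one reference */
    return ctx;
}

void itvt_context_ref(itvt_context *ctx)
{
    atomic_fetch_add_explicit(&ctx->refcount, 1, memory_order_relaxed);
}

/* Returns the remaining count; frees the context when it reaches zero. */
int itvt_context_unref(itvt_context *ctx)
{
    int remaining = atomic_fetch_sub_explicit(&ctx->refcount, 1,
                                              memory_order_acq_rel) - 1;
    if (remaining == 0)
        free(ctx);
    return remaining;
}
```

[With this shape, two independent users each take their own reference, and whichever calls unref last tears the context (and its queue) down.]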
The problem will be to handle this scenario:

 * Thread 1 (user's own thread) decides to shut down the context
 * Thread 2 (socket polling thread) is parsing an incoming message and trying to determine which callback to invoke
 * Thread 3 (callback thread) is out in user code and has at least one more callback queued

My first thought is that if the user is responsible for the lifetime, the user will need to use their own locking mechanism to do that. That implies the user needs to acquire and release the lock in the callbacks. If that's the case, then the callback may be trying to acquire the lock while thread 1 holds it for destroying the context. At the same time, in order to destroy the context, ITVTContext needs to be sure that no callbacks are currently running, so it needs to wait for thread 3 to return from the callback. Deadlock.

We could solve this by relieving the user of the need for a mutex around their object. Instead, we keep a structure inside the context that counts how many callbacks are currently running. If a shutdown was requested, then no new callbacks are started, and thread 3 (or its pool) exits when the flag is raised. The shutdown itself is a simple boolean inside ITVTContext, followed by waking up all the threads, followed by thread 1 doing pthread_join on all the other threads.

> > 4) the main ITVTContext API needs to be protected by a mutex on systems
> > that are thread-capable. On Linux, pthread_mutex is extremely
> > lightweight. If need be, I can write code to use futex(2) directly, at
> > the expense of not being able to use wait conditions.
>
> We are already (for better or worse) using glib2's mutex library, I
> would suggest selecting 1 mutex for the entire C stack, and would prefer
> something more cross-platform than pthread_mutex.

Unfortunately, no such thing exists. Alternatives:

1) GMutex - dependency on glib
2) C++11 std::mutex - not acceptable in the C lib
3) C11 mtx_t - ha! C11! Even less supported than C++11.
4) pthread_mutex - good on Unix, horrible with MinGW / Cygwin, not available with MSVC
5) pthread_mutex & a Windows equivalent

I'd say our choice is #5, and we roll our own wrappers.

-- 
Thiago Macieira - thiago.macieira (AT) intel.com
  Software Architect - Intel Open Source Technology Center

_______________________________________________
iotivity-dev mailing list
iotivity-dev at lists.iotivity.org
https://lists.iotivity.org/mailman/listinfo/iotivity-dev
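[A minimal sketch of what the option-#5 wrapper could look like. The itvt_mutex names are invented for illustration, and it assumes SRWLOCK (Vista and later) is acceptable on the Windows side:]

```c
/* Hypothetical one-header mutex wrapper for the whole C stack:
   pthread_mutex on Unix, SRWLOCK on Windows. Names are illustrative. */
#ifdef _WIN32
#include <windows.h>
typedef SRWLOCK itvt_mutex;
#define ITVT_MUTEX_INITIALIZER SRWLOCK_INIT
static inline void itvt_mutex_lock(itvt_mutex *m)   { AcquireSRWLockExclusive(m); }
static inline void itvt_mutex_unlock(itvt_mutex *m) { ReleaseSRWLockExclusive(m); }
#else
#include <pthread.h>
typedef pthread_mutex_t itvt_mutex;
#define ITVT_MUTEX_INITIALIZER PTHREAD_MUTEX_INITIALIZER
static inline void itvt_mutex_lock(itvt_mutex *m)   { pthread_mutex_lock(m); }
static inline void itvt_mutex_unlock(itvt_mutex *m) { pthread_mutex_unlock(m); }
#endif
```

[Keeping the wrapper to a typedef plus inline functions means no extra translation unit and no runtime cost over the native primitive on either platform.]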

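[For reference, the shutdown scheme described earlier in the mail (a shutdown flag plus a count of callbacks currently running, so thread 1 can wait for in-flight callbacks without the user holding a lock inside them) could be sketched roughly like this. All names are illustrative, and it assumes pthreads:]

```c
/* Hypothetical "callback gate" inside the context. A boolean shutdown
   flag plus a running-callback counter replaces the user-side mutex
   that caused the deadlock in the scenario above. */
#include <pthread.h>
#include <stdbool.h>

typedef struct {
    pthread_mutex_t lock;
    pthread_cond_t idle;       /* signalled when running_callbacks hits 0 */
    int running_callbacks;
    bool shutting_down;
} itvt_callback_gate;

/* Callback thread calls this before invoking user code.
   Returns false once shutdown was requested, so no new callback starts. */
bool gate_enter(itvt_callback_gate *g)
{
    pthread_mutex_lock(&g->lock);
    if (g->shutting_down) {
        pthread_mutex_unlock(&g->lock);
        return false;
    }
    g->running_callbacks++;
    pthread_mutex_unlock(&g->lock);
    return true;
}

void gate_leave(itvt_callback_gate *g)
{
    pthread_mutex_lock(&g->lock);
    if (--g->running_callbacks == 0)
        pthread_cond_broadcast(&g->idle);
    pthread_mutex_unlock(&g->lock);
}

/* Destroying thread: raise the flag, then wait until no callback is in
   flight before tearing the context down (and pthread_join'ing). */
void gate_shutdown(itvt_callback_gate *g)
{
    pthread_mutex_lock(&g->lock);
    g->shutting_down = true;
    while (g->running_callbacks > 0)
        pthread_cond_wait(&g->idle, &g->lock);
    pthread_mutex_unlock(&g->lock);
}
```

[Thread 1 only ever waits on the condition variable, never on a lock that user code holds across a callback, which avoids the deadlock described.]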