On Wed, 2017-03-15 at 22:32 +0000, Adel Boutros wrote:
> Hello,
> 
> As Alan said, we have for example "abused" schedule() in C++. Today we
> run the container on another thread and keep a shared queue between
> the container thread and the other threads. We call schedule() very
> frequently with a delay of 0. Whenever the container thread wakes, it
> checks the queue to send events, then calls schedule() again. Access
> to the queue is protected by locks.
> 
> The downside is that the container thread consumes a full CPU, but we
> can limit this by using a delay higher than 0 in schedule().
> 
> Regards,
> Adel
> 

And to be clear, this is a hack to get out of a short-term hole - we
will have a properly behaved solution soon.
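
For anyone stuck on the current release, a minimal sketch of that
schedule()-polling workaround is below. It is an illustration, not the
coming fix: the injecting_handler, pending_ and post() names are
invented for the example, the handler callbacks follow the 0.17-era C++
API, and the exact schedule() overload (std::function vs.
void_function0) depends on which proton version you build against.

// Sketch: drain a locked queue of callbacks on the container thread,
// then re-arm with schedule(). injecting_handler, pending_ and post()
// are illustrative names, not proton API.
#include <proton/container.hpp>
#include <proton/duration.hpp>
#include <proton/messaging_handler.hpp>
#include <deque>
#include <functional>
#include <mutex>

class injecting_handler : public proton::messaging_handler {
    std::mutex lock_;
    std::deque<std::function<void()> > pending_;

  public:
    // Called from any thread: queue work for the container thread.
    void post(std::function<void()> f) {
        std::lock_guard<std::mutex> g(lock_);
        pending_.push_back(f);
    }

    void on_container_start(proton::container& c) override {
        poll(c); // start the polling loop
    }

  private:
    void poll(proton::container& c) {
        std::deque<std::function<void()> > work;
        {
            std::lock_guard<std::mutex> g(lock_);
            work.swap(pending_);
        }
        for (auto& f : work) f(); // run queued work on the container thread
        // Re-arm. A small non-zero delay trades latency for CPU; a
        // delay of 0 burns a core, as Adel notes above.
        c.schedule(proton::duration(10), [this, &c]() { poll(c); });
    }
};

Calling post() from another thread only enqueues the callback; it runs
on the container thread the next time poll() fires, which is what makes
it safe for the callback to touch connections and senders owned by that
container.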

> ________________________________
> From: Alan Conway <[email protected]>
> Sent: Wednesday, March 15, 2017 10:51:24 PM
> To: [email protected]
> Subject: Re: [qpid-proton-cpp] default_container does not implement
> thread safety at all?
> 
> On Tue, 2017-03-14 at 01:23 +0200, Fj wrote:
> > Hello, I'm mightily confused.
> > 
> > There's
> > https://qpid.apache.org/releases/qpid-proton-0.17.0/proton/cpp/api/md_mt.html
> > that says in no uncertain terms that I can use event_loop.inject() to
> > execute a callback on the thread that runs the event loop. There's an
> > examples/cpp/mt/broker.cpp linked from the above that's supposed to be
> > a thread-safe implementation of a thread pool running a bunch of
> > containers.
> > 
> > But after "grep inject" and carefully sifting through the results,
> > it
> > seems
> > that after some indirection it all ends up calling
> > 
> > // FIXME aconway 2016-06-07: this is not thread safe. It is
> > sufficient for
> > using
> > // default_container::schedule() inside a handler but not for
> > inject() from
> > // another thread.
> > bool event_loop::impl::inject(void_function0& f) {
> >     try { f(); } catch(...) {}
> >     return true;
> > }
> > 
> > What the hell? Blanket exception suppression is just the icing on the
> > cake; nothing here even resembles doing the actual hard thing: telling
> > the reactor to inject a new custom event in a thread-safe manner.
> 
> Guilty as charged, m'lud. This was work in progress when we started to
> redo the guts in a more profound way. The Real Thing is coming - it is
> available in C now as pn_proactor, and the C++ binding is being
> refactored on top of it.
> 
> > Am I missing something obvious, or is the documentation there, and in
> > other places, and the example, chemically pure wishful thinking, like,
> > it would be pretty nice if we supported multithreading, and it would
> > work in such and such ways then, but unfortunately we don't, as a
> > matter of fact?
> 
> The state of the release is probably not clearly documented: the C++ MT
> interfaces *allow* you to build an MT app that replaces the
> proton::container (as demoed in the mt_broker), BUT the
> proton::container itself is still not multithreaded. It has a
> compatible API because it will be in the future.
> 
> > If so, maybe you gals and guys should fix the documentation, and not
> > just with "experimental" but with "DOESN'T WORK AT ALL" prominent on
> > the page?
> 
> I agree the doc is not clear; next release it should work as expected
> and not be so obscurely caveated.
> 
> > On more constructive notes:
> > 
> > 1) do people maybe use some custom containers, similar to the
> > epoll_container in the examples (which doesn't support schedule()
> > though, I have to point out)? If so, I much desire to see one.
> 
> Yes, but the plan is to fix the non-custom container to be fully
> thread-safe so you don't have to write your own unless you have special
> needs. You can also implement directly in C if you have very special
> needs.
> 
> > 2) why do you attempt to bolt on your event_loop thing to Connection
> > and below when the only thing that has the run() method is Container,
> > therefore that's the scope that any worker thread would have? It's the
> > Container that should have the inject() method. Underlying C API
> > notwithstanding.
> 
> Nope: the threading model is serialized-per-connection, so functions
> that operate only on connection state and are triggered from an event
> handler don't need to be locked. Only interactions that cross
> connections (e.g. one connection queues something and a subscriber on
> another connection needs to wake up) need synchronization. This is
> demonstrated by the mt_broker example. That's why injecting is
> per-connection.
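
To make the cross-connection case concrete, here is a rough sketch of
that wake-up pattern. Treat it as an illustration only: how you obtain
the per-connection event_loop varies between releases (later releases
rename it work_queue), inject() taking a std::function assumes a C++11
build, and the shared_queue/subscribe/push names are invented for the
example.

// Sketch: connection A produces a message; connection B's subscriber
// must be woken. Proton objects owned by B are touched only from B's
// own serialized context, by injecting a callback into B's event_loop.
#include <proton/event_loop.hpp>
#include <proton/message.hpp>
#include <proton/sender.hpp>
#include <deque>
#include <mutex>

class shared_queue {
    std::mutex lock_;
    std::deque<proton::message> messages_;
    proton::sender subscriber_;
    // Captured on B's connection thread when the subscriber attaches.
    proton::event_loop* subscriber_loop_ = nullptr;

  public:
    void subscribe(proton::sender s, proton::event_loop& loop) {
        std::lock_guard<std::mutex> g(lock_);
        subscriber_ = s;
        subscriber_loop_ = &loop;
    }

    // Called from connection A's thread (or any other thread).
    void push(const proton::message& m) {
        proton::event_loop* loop;
        {
            std::lock_guard<std::mutex> g(lock_);
            messages_.push_back(m);
            loop = subscriber_loop_;
        }
        // Run do_send() on B's thread; only that thread uses subscriber_.
        if (loop) loop->inject([this]() { do_send(); });
    }

  private:
    void do_send() {
        std::lock_guard<std::mutex> g(lock_);
        while (!messages_.empty() && subscriber_.credit() > 0) {
            subscriber_.send(messages_.front());
            messages_.pop_front();
        }
    }
};

The point is the last function: the proton sender is only ever used
from its own connection's thread, so no lock is needed around the
proton calls themselves - only around the application's shared data.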
> 
> > 3) What's the best way forward if I really have to have a threadsafe
> > container that I can inject some event into threadsafely:
> 
> The example demonstrates how you can do it, but is incomplete as you
> point out. Soon it will be built in.
> 
> >   3a) does the C API support thread safety? Like, the whole problem
> > is touching the reactor's job queue or whatever it has with taking an
> > appropriate lock, and it must take that same lock itself for it to
> > work.
> 
> It does now - see pn_proactor; there are examples. The C reactor is
> fundamentally and forever thread-unsafe and horribly broken in all
> ways with respect to concurrency. We started to introduce MT at the
> C++ layer and work around the reactor - proton::container lagged
> because it is so tied to the reactor.
> 
> Finally we decided to replace the pn_reactor entirely with the
> pn_proactor in C - this is designed to be concurrent and provide for
> replaceable IO. At this point the proactor exists and has initial
> implementations (libuv done, native iocp + epoll nearly so) but we are
> still working on replacing the reactor. It's been a big job.
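
For a flavour of the C side, here is a bare-bones sketch of the
proactor thread-pool loop. It is an assumption-laden sketch: the
proactor API was still settling at the time of this thread, so take the
function names as roughly what later shipped rather than a guaranteed
match for any particular release.

// Sketch: a pool of threads all call pn_proactor_wait(); the proactor
// hands each connection's events to at most one thread at a time, so
// per-connection handler code stays effectively single-threaded.
#include <proton/event.h>
#include <proton/proactor.h>
#include <thread>
#include <vector>

static void handle(pn_event_t* e) {
    (void)e; // per-event application logic goes here
}

static void worker(pn_proactor_t* p) {
    bool running = true;
    while (running) {
        pn_event_batch_t* batch = pn_proactor_wait(p); // blocks
        pn_event_t* e;
        while ((e = pn_event_batch_next(batch)) != NULL) {
            if (pn_event_type(e) == PN_PROACTOR_INTERRUPT) running = false;
            handle(e);
        }
        pn_proactor_done(p, batch); // return the batch to the proactor
    }
}

int main() {
    pn_proactor_t* p = pn_proactor();
    std::vector<std::thread> pool;
    for (int i = 0; i < 4; ++i) pool.emplace_back(worker, p);
    // ... pn_proactor_connect()/pn_proactor_listen() calls go here ...
    // Shutdown: each pn_proactor_interrupt() wakes exactly one waiter.
    for (size_t i = 0; i < pool.size(); ++i) pn_proactor_interrupt(p);
    for (size_t i = 0; i < pool.size(); ++i) pool[i].join();
    pn_proactor_free(p);
    return 0;
}

That batching is the concurrency model: events for one connection are
serialized, and anything shared across connections is the application's
problem, same as with the C++ binding.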
> 
> > 
> >   3b) I checked out the Python API; they use a mock EventInjector
> > pollable thingie instead of reaching inside the reactor and telling it
> > to add an event.
> 
> Yep, the python API is due for an overhaul also but works for now as
> far as python threads are concerned. Ruby likewise.
> 
> >   3c) epoll example does yet another thing apparently (though I'm not
> > sure it works because it's not covered by any tests): just tell the
> > reactor to wake up and process the queued job list in your handler.
> 
> The reactor is a dead end threading-wise, hence the kerfuffle.
> 
> Let us know more about what you are trying to do and we can see if
> early access to anything in progress might help you, or if we can find
> you some workarounds based on the existing release. Others have worked
> around this in the existing container by abusing schedule() and using
> some application-level locks.
> 


---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
