[jira] [Created] (PROTON-1167) Qpid-proton: SIGSEGV crash when a queue becomes full

2016-03-29 Thread Graham Leggett (JIRA)
Graham Leggett created PROTON-1167:
--

 Summary: Qpid-proton: SIGSEGV crash when a queue becomes full
 Key: PROTON-1167
 URL: https://issues.apache.org/jira/browse/PROTON-1167
 Project: Qpid Proton
  Issue Type: Bug
  Components: proton-c
Affects Versions: 0.12.0
 Environment: CentOS7 (latest)
qpid-proton-c-0.12.0-1.el7.x86_64

Reporter: Graham Leggett


When qpid is asked to create a queue with default settings as follows:

{code}
qpid-config add queue foo
{code}
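The report does not include the client that was used to fill the queue. For anyone trying to reproduce, a small proton-c sender along the following lines should push the queue past its default 100MB limit. This is a minimal sketch against the 0.12-era messenger API (deprecated in later releases); the broker address, queue name and message count are illustrative assumptions, not taken from the report.

{code}
/* Minimal sketch only: floods queue "foo" with 1MB messages using the
 * proton-c messenger API.  Address, queue name and message count are
 * assumptions for illustration; error handling is omitted. */
#include <stdlib.h>
#include <proton/message.h>
#include <proton/messenger.h>

int main(void) {
  pn_messenger_t *m = pn_messenger(NULL);
  pn_messenger_start(m);

  char *payload = calloc(1, 1024 * 1024);            /* 1MB body */
  for (int i = 0; i < 200; i++) {                    /* ~200MB, well past 100MB */
    pn_message_t *msg = pn_message();
    pn_message_set_address(msg, "amqp://127.0.0.1/foo");
    pn_data_put_binary(pn_message_body(msg), pn_bytes(1024 * 1024, payload));
    pn_messenger_put(m, msg);
    pn_messenger_send(m, -1);                        /* block until the batch is sent */
    pn_message_free(msg);
  }

  pn_messenger_stop(m);
  pn_messenger_free(m);
  free(payload);
  return 0;
}
{code}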

If the queue is then filled to overflow with 1MB messages (for example with a client like the one sketched above) until space runs out, qpid crashes as follows:

{code}
2016-03-29 22:18:59 [Network] debug qpid.127.0.0.1:5672-127.0.0.1:43002 decoded 65536 bytes from 65536
2016-03-29 22:18:59 [Network] debug qpid.127.0.0.1:5672-127.0.0.1:43002 decoded 1016 bytes from 1016
2016-03-29 22:18:59 [Broker] debug received delivery: \xE4\x03\x00\x00\x00\x00\x00\x00
2016-03-29 22:18:59 [Broker] debug Message received: 1049552 bytes
2016-03-29 22:18:59 [System] debug Exception constructed: Maximum depth exceeded on foo: current=[count: 125, size: 103905496], max=[size: 104857600] (/builddir/build/BUILD/qpid-cpp-0.34/src/qpid/broker/Queue.cpp:1633)
2016-03-29 22:18:59 [Network] debug qpid.127.0.0.1:5672-127.0.0.1:43002 encoded 249 bytes from 65536
2016-03-29 22:18:59 [Network] debug qpid.127.0.0.1:5672-127.0.0.1:43002 decoded 51 bytes from 51
2016-03-29 22:18:59 [Broker] debug received delivery: \xE4\x03\x00\x00\x00\x00\x00\x00
2016-03-29 22:18:59 [Broker] debug Message received: 0 bytes
2016-03-29 22:18:59 [Broker] debug clean(): 125 messages remain; head is now 0
2016-03-29 22:18:59 [Broker] debug Message 0x69b2e0 published, state is 1 (head is now 0)
2016-03-29 22:18:59 [Broker] debug Message 126 enqueued on foo

Program received signal SIGSEGV, Segmentation fault.
pni_process_tpwork_receiver (settle=, delivery=0x698550, transport=0x7fffec01c710) at /usr/src/debug/qpid-proton-0.12.0/proton-c/src/transport/transport.c:2147
2147      if ((int16_t) ssn->state.local_channel >= 0 && !delivery->remote.settled && delivery->state.init) {
Missing separate debuginfos, use: debuginfo-install boost-program-options-1.53.0-25.el7.x86_64 keyutils-libs-1.5.8-3.el7.x86_64 krb5-libs-1.13.2-10.el7.x86_64 libaio-0.3.109-13.el7.x86_64 libcom_err-1.42.9-7.el7.x86_64 libdb4-cxx-4.8.30-13.el7.x86_64 libselinux-2.2.2-6.el7.x86_64 libuuid-2.23.2-26.el7.x86_64 nss-softokn-freebl-3.16.2.3-13.el7_1.x86_64 pcre-8.32-15.el7.x86_64 xz-libs-5.1.2-12alpha.el7.x86_64 zlib-1.2.7-15.el7.x86_64
(gdb) bt
#0  pni_process_tpwork_receiver (settle=, delivery=0x698550, transport=0x7fffec01c710) at /usr/src/debug/qpid-proton-0.12.0/proton-c/src/transport/transport.c:2147
#1  pni_process_tpwork (transport=transport@entry=0x7fffec01c710, endpoint=) at /usr/src/debug/qpid-proton-0.12.0/proton-c/src/transport/transport.c:2181
#2  0x73a898c1 in pni_process_tpwork (endpoint=, transport=0x7fffec01c710) at /usr/src/debug/qpid-proton-0.12.0/proton-c/src/transport/transport.c:2164
#3  pni_phase (phase=, transport=0x7fffec01c710) at /usr/src/debug/qpid-proton-0.12.0/proton-c/src/transport/transport.c:2381
#4  pni_process (transport=) at /usr/src/debug/qpid-proton-0.12.0/proton-c/src/transport/transport.c:2399
#5  pn_output_write_amqp (transport=, layer=, bytes=0x7fffec00bf80 "", available=16384) at /usr/src/debug/qpid-proton-0.12.0/proton-c/src/transport/transport.c:2550
#6  0x73a8aacc in transport_produce (transport=transport@entry=0x7fffec01c710) at /usr/src/debug/qpid-proton-0.12.0/proton-c/src/transport/transport.c:2603
#7  pn_transport_pending (transport=transport@entry=0x7fffec01c710) at /usr/src/debug/qpid-proton-0.12.0/proton-c/src/transport/transport.c:2882
#8  0x73a8acd7 in pn_transport_output (transport=0x7fffec01c710, bytes=0x7fffec02f280 "", size=65536) at /usr/src/debug/qpid-proton-0.12.0/proton-c/src/transport/transport.c:2630
#9  0x73d046ee in qpid::broker::amqp::Connection::encode (this=0x7fffec007780, buffer=0x7fffec02f280 "", size=65536) at /usr/src/debug/qpid-cpp-0.34/src/qpid/broker/amqp/Connection.cpp:233
#10 0x7749b3c4 in qpid::sys::AsynchIOHandler::idle (this=0x7fffec01ca30) at /usr/src/debug/qpid-cpp-0.34/src/qpid/sys/AsynchIOHandler.cpp:221
#11 0x774125a6 in operator() (a0=..., this=0x7fffec000d78) at /usr/include/boost/function/function_template.hpp:767
#12 qpid::sys::posix::AsynchIO::writeable (this=0x7fffec000b80, h=...) at /usr/src/debug/qpid-cpp-0.34/src/qpid/sys/posix/AsynchIO.cpp:582
#13 0x7749dce1 in operator() (a0=..., this=) at /usr/include/boost/function/function_template.hpp:767
#14 qpid::sys::DispatchHandle::processEvent (this=0x7fffec000b88, type=qpid::sys::Poller::WRITABLE) at /usr/src/debug/qpid-cpp-0.34/src/qpid/sys/DispatchHandle.cpp:283
#15 
{code}
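The log shows the broker hitting the queue's maximum depth, and the segfault then occurs while proton flushes delivery work on the output path (Connection::encode calls pn_transport_output, which ends up in pni_process_tpwork_receiver). For orientation only, a receiver that refuses a transfer after hitting a resource limit typically does something like the following with the plain proton-c API. The calls are real 0.12 functions, but this is a guess at the broker-side pattern, not the actual qpid-cpp code:

{code}
/* Sketch of how an AMQP 1.0 receiver commonly refuses a transfer when a
 * resource limit is exceeded.  Hypothetical helper for illustration; the
 * real logic lives in qpid::broker::amqp, not here. */
#include <proton/delivery.h>
#include <proton/disposition.h>
#include <proton/link.h>

static void refuse_transfer(pn_delivery_t *dlv) {
  pn_link_t *recv = pn_delivery_link(dlv);   /* receiving link for this transfer */
  pn_link_advance(recv);                     /* step past the current delivery */
  pn_delivery_update(dlv, PN_REJECTED);      /* tell the sender it was refused */
  pn_delivery_settle(dlv);                   /* settle locally; proton flushes this
                                                work on the next output pass */
}
{code}

If that is roughly what happens here, the SIGSEGV at transport.c:2147 would mean the delivery's session state is no longer valid by the time the settled work is processed; that is speculation from the backtrace, not a confirmed root cause.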

Re: Creating a sender outside of a container's thread

2016-03-29 Thread Mark Banner
Thanks Alan.

On Thu, Mar 24, 2016 at 11:02 PM, Alan Conway wrote:

> On Fri, 2016-03-18 at 16:01 +0100, Mark Banner wrote:
> > I have gone through the documentation and code and have the following
> > understanding of container vs connection_engine:
> >
> > Container is an adapter for pn_reactor. The proton reactor manages network
> > IO and sends events to a registered handler, so the container registers a
> > proxy handler which passes events on to the user's implementation.
> >
> > Connection_engine passes IO to a pn_transport (through io_read and
> > io_write) and receives events through a pn_collector. Dispatching is then
> > done directly to the user handler.
> >
> > From my understanding, connection_engine looks more efficient than the
> > container for the same functionality (if socket_engine is used). It also
> > offers more control, since a custom implementation can modify the event
> > loop and mock IO for testing. This makes it a better fit for me than the
> > "application event injection" proposed in the thread on qpid-users.
> > Am I missing something that is managed by the reactor or the container?
>
> That's an excellent summary.
>
> >
> > Also, the release notes for proton 0.12 say that the API for container is
> > stable but that connection_engine is still unstable. Will this interface
> > still have the same role in future versions (i.e. converting IO to proton
> > events)?
>
> Yes. The connection_engine separates just the AMQP features of proton
> without entangling them in any assumptions about IO or threads. It will
> stay as an integration point for users that have special IO/threading
> needs.
>
> The plan is then to layer more convenience on top of the engine for common
> threading and IO use-cases, so users that are happy with a generic
> implementation don't have to start from scratch.
>
> Injecting functions will be a part of the ease-of-use layer for the
> connection engine, but injecting into a specific connection context rather
> than a container of many connections.
>
> These are not untested ideas; they follow from patterns in the qpidd C++
> broker, the qpid dispatch engine and the proton Go binding.
>
> Cheers,
> Alan.
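
For anyone mapping the summary above onto the C API: the engine style amounts to binding a pn_connection_t to a pn_transport_t and a pn_collector_t, feeding raw bytes in, draining events, and pulling raw bytes out. The sketch below uses the public 0.12 proton-c calls purely for illustration; the C++ connection_engine wraps this pattern, but its exact interface differs and was still marked unstable at the time.

{code}
/* Rough sketch of the engine-style flow at the proton-c level: the caller
 * owns the socket and drives the transport and collector directly.
 * Illustration only; return values and error handling are omitted. */
#include <stddef.h>
#include <proton/connection.h>
#include <proton/event.h>
#include <proton/transport.h>

/* One-time setup: attach the connection to a transport and a collector. */
void engine_setup(pn_connection_t *conn, pn_transport_t *t, pn_collector_t *coll) {
  pn_connection_collect(conn, coll);   /* events from conn go to our collector */
  pn_transport_bind(t, conn);          /* transport encodes/decodes for conn */
}

/* Per-iteration pump: "io_read" in, dispatch events, "io_write" out. */
void engine_pump(pn_transport_t *t, pn_collector_t *coll,
                 const char *in, size_t in_len, char *out, size_t out_max) {
  pn_transport_input(t, in, in_len);             /* bytes read from the socket */

  pn_event_t *e;
  while ((e = pn_collector_peek(coll)) != NULL) {
    switch (pn_event_type(e)) {                  /* dispatch straight to user code */
      case PN_DELIVERY:  /* handle an incoming or updated delivery */ break;
      case PN_LINK_FLOW: /* credit changed, maybe send more */        break;
      default: break;
    }
    pn_collector_pop(coll);
  }

  pn_transport_output(t, out, out_max);          /* bytes to write to the socket */
}
{code}

pn_transport_input and pn_transport_output are the older byte-shovelling calls; later releases favour the pn_transport_capacity/pn_transport_push and pn_transport_pending/pn_transport_peek/pn_transport_pop variants.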