Proton-J engine and thread safety

2015-06-10 Thread Kritikos, Alex
Hi all,

Is the proton-j engine meant to be thread-safe?
We have been experiencing some sporadic issues where under load, the engine 
sends callbacks to registered handlers with null as the event. We do not have a 
standalone repro case yet but just wondered what other people’s experience is, 
as well as what the recommendations around thread safety are.

Thanks,

Alex Kritikos
Software AG
This communication contains information which is confidential and may also be 
privileged. It is for the exclusive use of the intended recipient(s). If you 
are not the intended recipient(s), please note that any distribution, copying, 
or use of this communication or the information in it, is strictly prohibited. 
If you have received this communication in error please notify us by e-mail and 
then delete the e-mail and any copies of it.
Software AG (UK) Limited Registered in England & Wales 1310740 - 
http://www.softwareag.com/uk



Re: Strange behaviour for pn_messenger_send on CentOS 6

2015-06-10 Thread Darryl L. Pierce
On Tue, Jun 09, 2015 at 10:41:51PM +0100, Frank Quinn wrote:
 Just tried this and can recreate on a 64 bit laptop running CentOS 6.6
 natively too. The send application never exits if a receiver is not yet
 ready. Can anyone else see this or am I going mad?

Are these bits you've built or did you install them from RPM (not sure
if you said before, so please remind me)? And are you running a Qpid or
other broker? I ask because it seems you're getting a connection (no
connection refused error) but something else is causing the failure:
the SASL header mismatch.

How is your SASL configuration set up on the broker to which you're
connecting?

-- 
Darryl L. Pierce, Sr. Software Engineer @ Red Hat, Inc.
Delivering value year after year.
Red Hat ranks #1 in value among software vendors.
http://www.redhat.com/promo/vendor/





Re: Strange behaviour for pn_messenger_send on CentOS 6

2015-06-10 Thread Frank Quinn
You can recreate this on a CentOS 6.6 box by installing qpid-proton-c-devel
using yum from EPEL, then compiling the example application that comes with
it in /usr/share/proton/examples/messenger/send.c.

There is no broker with these example applications - it's point-to-point.

There is a matching recv.c application there too. If you start recv first,
it works fine, but if you don't, the send application hangs, which I believe
is new behaviour.

Again, this does not happen on my Fedora laptop - only on CentOS 6.

Cheers,
Frank

On Wed, Jun 10, 2015 at 1:34 PM, Darryl L. Pierce dpie...@redhat.com
wrote:

 On Tue, Jun 09, 2015 at 10:41:51PM +0100, Frank Quinn wrote:
  Just tried this and can recreate on a 64 bit laptop running CentOS 6.6
  natively too. The send application never exits if a receiver is not yet
  ready. Can anyone else see this or am I going mad?

 Are these bits you've built or did you install them from RPM (not sure
 if you said before, so please remind me)? And are you running a Qpid or
 other broker? I ask because it seems you're getting a connection (no
 connection refused error) but something else is causing the failure,
 the SASL header mismatch.

 How is your SASL configuration set up on the broker to which you're
 connecting?

 --
 Darryl L. Pierce, Sr. Software Engineer @ Red Hat, Inc.
 Delivering value year after year.
 Red Hat ranks #1 in value among software vendors.
 http://www.redhat.com/promo/vendor/




[jira] [Updated] (PROTON-905) Long-lived connections leak sessions and links

2015-06-10 Thread Ken Giusti (JIRA)

 [ 
https://issues.apache.org/jira/browse/PROTON-905?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ken Giusti updated PROTON-905:
--
Attachment: test-send.py

A test pyngus script that can reproduce the problem when run against the 0.30 
qpidd broker.

Remember to run 'drain -f amq.topic' in the background to prevent message 
build-up.

 Long-lived connections leak sessions and links
 --

 Key: PROTON-905
 URL: https://issues.apache.org/jira/browse/PROTON-905
 Project: Qpid Proton
  Issue Type: Bug
  Components: proton-c
Affects Versions: 0.9.1
Reporter: Ken Giusti
Priority: Blocker
 Fix For: 0.10

 Attachments: test-send.py


 I found this issue while debugging a crash dump of qpidd.
 Long-lived connections do not free their sessions/links.
 This only applies when NOT using the event model.  The version of qpidd I 
 tested against (0.30) still uses the iterative model.  A point to consider: I 
 don't know why this is the case.
 Details:  I have a test script that opens a single connection, then 
 continually creates sessions/links over that connection, sending one message 
 before closing and freeing the sessions/links.  See attached.
 Over time the qpidd run time consumes all memory on the system and is killed 
 by OOM.  To be clear, I'm using drain to remove all sent messages - there is 
 no message build up.
 On debugging this, I'm finding thousands of session objects on the 
 connection's free-sessions weakref list.  Every one of those sessions has a 
 refcount of one.
 Once the connection is finalized, all session objects are freed.  But until 
 then, freed sessions continue to accumulate indefinitely.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Proton-c Null Messages

2015-06-10 Thread logty
Ah, okay, you are right, I checked out the source code and it does do that
automatically. I also ran PN_TRACE_FRM=1 on the sender and found that the
delivery-tag is set correctly on the way out, yet it is empty on the other
side. I am using proton 0.9.1, and using an Apache Apollo broker. The
messaging works fine using a RabbitMQ broker.



--
View this message in context: 
http://qpid.2158936.n2.nabble.com/Proton-c-Null-Messages-tp7625967p7626164.html
Sent from the Apache Qpid Proton mailing list archive at Nabble.com.


[jira] [Created] (PROTON-905) Long-lived connections leak sessions and links

2015-06-10 Thread Ken Giusti (JIRA)
Ken Giusti created PROTON-905:
-

 Summary: Long-lived connections leak sessions and links
 Key: PROTON-905
 URL: https://issues.apache.org/jira/browse/PROTON-905
 Project: Qpid Proton
  Issue Type: Bug
  Components: proton-c
Affects Versions: 0.9.1
Reporter: Ken Giusti
Priority: Blocker
 Fix For: 0.10


I found this issue while debugging a crash dump of qpidd.

Long-lived connections do not free their sessions/links.

This only applies when NOT using the event model.  The version of qpidd I 
tested against (0.30) still uses the iterative model.  A point to consider: I 
don't know why this is the case.

Details:  I have a test script that opens a single connection, then continually 
creates sessions/links over that connection, sending one message before closing 
and freeing the sessions/links.  See attached.

Over time the qpidd run time consumes all memory on the system and is killed by 
OOM.  To be clear, I'm using drain to remove all sent messages - there is no 
message build up.

On debugging this, I'm finding thousands of session objects on the connection's 
free-sessions weakref list.  Every one of those sessions has a refcount of one.

Once the connection is finalized, all session objects are freed.  But until 
then, freed sessions continue to accumulate indefinitely.
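In engine-API terms, the pattern the test script exercises looks roughly like the sketch below (proton-c 0.9-era API; this is an illustration, not the attached test-send.py). Note that pn_session_free()/pn_link_free() only mark the objects for reclamation; as described above, the freed sessions linger on the connection's weakref list until the connection itself is finalized.

```c
#include <proton/connection.h>
#include <proton/session.h>
#include <proton/link.h>

/* Sketch of the leak pattern: one long-lived connection,
 * sessions/links created, used, and freed in a loop. */
void churn(pn_connection_t *conn, int iterations)
{
    for (int i = 0; i < iterations; i++) {
        pn_session_t *ssn = pn_session(conn);
        pn_session_open(ssn);
        pn_link_t *snd = pn_sender(ssn, "sender");
        pn_link_open(snd);
        /* ... send one message over snd ... */
        pn_link_close(snd);
        pn_session_close(ssn);
        pn_link_free(snd);
        /* The session stays on conn's free-sessions weakref list
         * until pn_connection_free() - the accumulation reported here. */
        pn_session_free(ssn);
    }
}
```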



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Proton-c Null Messages

2015-06-10 Thread dylan25
We're using Apache Apollo. Is there the possibility that Apollo has a
server-side configuration setting that limits message sizes?



--
View this message in context: 
http://qpid.2158936.n2.nabble.com/Proton-c-Null-Messages-tp7625967p7626159.html
Sent from the Apache Qpid Proton mailing list archive at Nabble.com.


Re: Proton-c Null Messages

2015-06-10 Thread Gordon Sim

On 06/09/2015 09:36 PM, logty wrote:

Can you give an example of how I would set the delivery tag?


You don't set the tag when using messenger, that should be done for you. 
What version of proton are you using? Are you using a broker or similar? 
Or just sending direct between two processes using proton-c?




[Resending] - Proton-J engine and thread safety

2015-06-10 Thread Kritikos, Alex
[Resending as it ended up in the wrong thread]

Hi all,

Is the proton-j engine meant to be thread-safe?
We have been experiencing some sporadic issues where under load, the engine 
sends callbacks to registered handlers with null as the event. We do not have a 
standalone repro case yet but just wondered what other people’s experience is, 
as well as what the recommendations around thread safety are.

Thanks,

Alex Kritikos
Software AG
This communication contains information which is confidential and may also be 
privileged. It is for the exclusive use of the intended recipient(s). If you 
are not the intended recipient(s), please note that any distribution, copying, 
or use of this communication or the information in it, is strictly prohibited. 
If you have received this communication in error please notify us by e-mail and 
then delete the e-mail and any copies of it.
Software AG (UK) Limited Registered in England & Wales 1310740 - 
http://www.softwareag.com/uk



Re: C++ binding naming conventions: Qpid vs. C++

2015-06-10 Thread aconway
On Tue, 2015-06-09 at 19:56 +0100, Gordon Sim wrote:
 On 06/09/2015 07:47 PM, aconway wrote:
  C++ standard library uses lowercase_and_underscores, but Qpid C++
 projects to date use JavaWobbleCaseIdentifiers. Is the C++ binding 
  the
  time to start writing C++ like C++ programmers? Or will somebody's 
  head
  explode if class names start with a lower case letter?
  
  In particular since the proton C library is written in typical
  c_style_with_underscores, I am finding the CamelCase in the C++ 
  binding
  to be an ugly clash.
 
 I agree and I would go with underscores (and I'm largely responsible 
 for 
 the poor choice in qpid-cpp, sorry!).
 

Woo-hoo! From the horse's mouth ;) Anyone know a good C++ de-camelcasing
script?




Re: C++ binding naming conventions: Qpid vs. C++

2015-06-10 Thread Chuck Rolke
The .NET binding on top of Qpid C++ Messaging library had the same problem.
cjansen suggested that the binding present a naming convention consistent
with what the binding users might expect. So that binding did not simply
copy all the C++ function and variable names but renamed them along the way.

If you do a one-to-one mapping it's sometimes easier to see what exactly
the function and variable mapping is. When stuff is renamed it's harder.

You are so early in the dev cycle that you can be consistent in whatever
form you choose.

- Original Message -
 From: aconway acon...@redhat.com
 To: proton proton@qpid.apache.org
 Sent: Tuesday, June 9, 2015 2:47:06 PM
 Subject: C++ binding naming conventions: Qpid vs. C++
 
 C++ standard library uses lowercase_and_underscores, but Qpid C++
 projects to date use JavaWobbleCaseIdentifiers. Is the C++ binding the
 time to start writing C++ like C++ programmers? Or will somebody's head
 explode if class names start with a lower case letter?
 
 In particular since the proton C library is written in typical
 c_style_with_underscores, I am finding the CamelCase in the C++ binding
 to be an ugly clash.
 
 DoesAnybodyReallyThinkThis is_easier_to_read_than_this?
 
 Cheers,
 Alan.
 


Re: C++ binding naming conventions: Qpid vs. C++

2015-06-10 Thread Gordon Sim

On 06/10/2015 02:25 PM, aconway wrote:

Woo-hoo! From the horse's mouth ;)


In this case it's the other end of the horse that would be a more apt 
description!




Re: Strange behaviour for pn_messenger_send on CentOS 6

2015-06-10 Thread Darryl L. Pierce
On Wed, Jun 10, 2015 at 02:17:23PM +0100, Frank Quinn wrote:
 You can recreate this on a CentOS 6.6 box by installing qpid-proton-c-devel
 using yum from EPEL, then compiling the example application that comes with
 it in /usr/share/proton/examples/messenger/send.c.
 
 There is no broker with these example applications - it's point to point.
 
 There is a matching recv.c application there too. If you start recv first,
 it works fine, but if you don't, the send application hangs which I believe
 is new behaviour.
 
 Again, this does not happen on my Fedora laptop - only on CentOS 6.

I'm able to recreate this on RHEL6 as well using the RPMs from EPEL. I
see the connection aborted notice in the trace and then it fails to
exit. It appears to be repeatedly calling pn_transport_closed(). The
stack trace I see on EL6 is:

(gdb) backtrace
#0  pn_transport_closed (transport=0x804fa28)
at /home/mcpierce/Programming/Proton/proton-c/src/transport/transport.c:2802
#1  0x00140ea8 in pni_connection_capacity (ctx=0x804f920)
at /home/mcpierce/Programming/Proton/proton-c/src/messenger/messenger.c:166
#2  pni_connection_update (ctx=0x804f920) at 
/home/mcpierce/Programming/Proton/proton-c/src/messenger/messenger.c:196
#3  pni_conn_modified (ctx=0x804f920) at 
/home/mcpierce/Programming/Proton/proton-c/src/messenger/messenger.c:225
#4  0x00141071 in pn_messenger_process_transport (messenger=0x804cb40, 
event=0x805c520)
at /home/mcpierce/Programming/Proton/proton-c/src/messenger/messenger.c:1201
#5  0x00141134 in pn_messenger_process_events (messenger=0x804cb40)
at /home/mcpierce/Programming/Proton/proton-c/src/messenger/messenger.c:1252
#6  0x00141f83 in pni_connection_readable (sel=0x804f958)
at /home/mcpierce/Programming/Proton/proton-c/src/messenger/messenger.c:261
#7  0x00143425 in pn_selectable_readable (selectable=0x804f958)
at /home/mcpierce/Programming/Proton/proton-c/src/selectable.c:204
#8  0x00141483 in pn_messenger_process (messenger=0x804cb40)
at /home/mcpierce/Programming/Proton/proton-c/src/messenger/messenger.c:1310
#9  0x001415c8 in pn_messenger_tsync (messenger=0x804cb40, predicate=0x13da40 
pn_messenger_sent, timeout=-1)
at /home/mcpierce/Programming/Proton/proton-c/src/messenger/messenger.c:1379
#10 0x00141a97 in pn_messenger_sync (messenger=0x804cb40, predicate=0x13da40 
pn_messenger_sent)
at /home/mcpierce/Programming/Proton/proton-c/src/messenger/messenger.c:1410
#11 0x00141c8c in pn_messenger_send (messenger=0x804cb40, n=-1)
at /home/mcpierce/Programming/Proton/proton-c/src/messenger/messenger.c:2119
#12 0x08048e59 in main (argc=-72537468, argv=0xb7ffe000)
at /home/mcpierce/Programming/Proton/examples/c/messenger/send.c:102

The same backtrace on F22 is:

(gdb) backtrace
#0  pn_transport_closed (transport=transport@entry=0x60ac40)
at /home/mcpierce/Programming/Proton/proton-c/src/transport/transport.c:2801
#1  0x77bbf0a8 in pni_connection_capacity (sel=0x60aaf0)
at /home/mcpierce/Programming/Proton/proton-c/src/messenger/messenger.c:166
#2  pni_connection_update (sel=0x60aaf0) at 
/home/mcpierce/Programming/Proton/proton-c/src/messenger/messenger.c:196
#3  pni_conn_modified (ctx=0x60aa80) at 
/home/mcpierce/Programming/Proton/proton-c/src/messenger/messenger.c:225
#4  0x77bbf135 in pn_messenger_process_transport 
(messenger=messenger@entry=0x605970, 
event=event@entry=0x61a240) at 
/home/mcpierce/Programming/Proton/proton-c/src/messenger/messenger.c:1201
#5  0x77bbf27b in pn_messenger_process_events 
(messenger=messenger@entry=0x605970)
at /home/mcpierce/Programming/Proton/proton-c/src/messenger/messenger.c:1252
#6  0x77bbf6d9 in pni_connection_readable (sel=0x60aaf0)
at /home/mcpierce/Programming/Proton/proton-c/src/messenger/messenger.c:261
#7  0x77bbf7e8 in pn_messenger_process 
(messenger=messenger@entry=0x605970)
at /home/mcpierce/Programming/Proton/proton-c/src/messenger/messenger.c:1310
#8  0x77bbf94f in pn_messenger_tsync (messenger=0x605970, 
predicate=0x77bbc420 pn_messenger_sent, 
timeout=-1) at 
/home/mcpierce/Programming/Proton/proton-c/src/messenger/messenger.c:1379
#9  0x00401158 in main (argc=optimized out, argv=optimized out)
at /home/mcpierce/Programming/Proton/examples/c/messenger/send.c:102

There's definitely a difference in the runtime behavior of Fedora vs.
RHEL in this case.
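For anyone hitting this, one way to keep the application diagnosable while the hang is investigated is to give the messenger a finite timeout instead of the default block-forever behaviour. A minimal sketch against the 0.9-era messenger API (address and timeout are illustrative assumptions; error handling elided):

```c
#include <stdio.h>
#include <proton/messenger.h>
#include <proton/message.h>
#include <proton/error.h>

int main(void)
{
    pn_messenger_t *m = pn_messenger(NULL);
    pn_messenger_set_timeout(m, 5000);   /* fail after 5s instead of blocking forever */
    pn_messenger_start(m);

    pn_message_t *msg = pn_message();
    /* Assumed peer address, as in the stock send.c example */
    pn_message_set_address(msg, "amqp://127.0.0.1/examples");
    pn_messenger_put(m, msg);

    /* Non-zero return indicates error or timeout rather than hanging */
    if (pn_messenger_send(m, -1))
        fprintf(stderr, "send failed: %s\n",
                pn_error_text(pn_messenger_error(m)));

    pn_message_free(msg);
    pn_messenger_stop(m);
    pn_messenger_free(m);
    return 0;
}
```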

-- 
Darryl L. Pierce, Sr. Software Engineer @ Red Hat, Inc.
Delivering value year after year.
Red Hat ranks #1 in value among software vendors.
http://www.redhat.com/promo/vendor/





Re: C++ binding naming conventions: Qpid vs. C++

2015-06-10 Thread aconway
On Wed, 2015-06-10 at 09:41 -0400, Chuck Rolke wrote:
 The .NET binding on top of Qpid C++ Messaging library had the same 
 problem.
 cjansen suggested that the binding present a naming convention 
 consistent
 with what the binding users might expect. So that binding did not 
 simply
 copy all the C++ function and variable names but renamed them along 
 the way.
 
 If you do a one-to-one mapping it's sometimes easier to see what 
 exactly
 the function and variable mapping is. When stuff is renamed it's 
 harder.
 
 You are so early in the dev cycle that you can be consistent in 
 whatever
 form you choose.

Yup. C++ does not have such strong naming traditions as some languages
since it sort of grew by accident and misadventure out of C and
originally did not have any standard library written in C++ to provide
an example. However these days there is a large and widely used std
library with a clear naming convention so I'm strongly tempted to go that
way.

 
 - Original Message -
  From: aconway acon...@redhat.com
  To: proton proton@qpid.apache.org
  Sent: Tuesday, June 9, 2015 2:47:06 PM
  Subject: C++ binding naming conventions: Qpid vs. C++
  
  C++ standard library uses lowercase_and_underscores, but Qpid C++
  projects to date use JavaWobbleCaseIdentifiers. Is the C++ binding 
  the
  time to start writing C++ like C++ programmers? Or will somebody's 
  head
  explode if class names start with a lower case letter?
  
  In particular since the proton C library is written in typical
  c_style_with_underscores, I am finding the CamelCase in the C++ 
  binding
  to be an ugly clash.
  
  DoesAnybodyReallyThinkThis is_easier_to_read_than_this?
  
  Cheers,
  Alan.
  


Re: [Resending] - Proton-J engine and thread safety

2015-06-10 Thread Timothy Bish
On 06/10/2015 10:18 AM, Kritikos, Alex wrote:
 Hi Alan,

 thanks for your response. We also use an engine per connection however there 
 are different read and write threads interacting with it and the issues only 
 occur under load.
 I guess we should try to create a repro case.

 Thanks,

 Alex Kritikos
 Software AG
 On 10 Jun 2015, at 16:50, aconway acon...@redhat.com wrote:

To my knowledge there is no expectation that Proton-J is any more thread
safe than Proton-C.  In both the new Qpid JMS client and in the ActiveMQ
broker that uses Proton-J, we serialize access to the engine for that
reason.

 On Wed, 2015-06-10 at 09:34 +, Kritikos, Alex wrote:
 [Resending as it ended up in the wrong thread]

 Hi all,

 is the proton-j engine meant to be thread safe?
 The C engine is definitely NOT meant to be thread safe. Unless you have
 found an explicit written declaration that the java engine is supposed
 to be AND a bunch of code to back that up I wouldn't rely on it.

 The way we use proton in the C++ broker and in the upcoming Go binding
 is to create an engine per connection and serialize the action on each
 connection. In principle you can read and write from the OS connection
 concurrently, but it's debatable how much you gain: you are likely just
 moving OS buffers into app buffers, which is not a big win.

 The inbound and outbound protocol state *for a single connection* is
 pretty closely tied together. Proton is probably taking the right
 approach by assuming both are handled in a single concurrency context.

 The engine state for separate connections is *completely independent*
 so it's safe to run engines for separate connections in separate
 contexts.

 The recent reactor extensions to proton are interesting but not thread
 friendly. They force the protocol handling for multiple connections
 into a single thread context, which is great for single threaded apps
 but IMO the wrong way to go for concurrent apps.

 The go binding uses channels to pump data from connection read/write
 goroutines to a proton engine event loop goroutine per connection. The
 C++ broker predates the reactor and does its own polling, with
 read/write activity on an FD dispatched sequentially to worker
 threads so the proton engine for a connection is never used
 concurrently.

 There may be something interesting we can do at the proton layer to
 help with this pattern or it may be better to leave concurrency above
 the binding to be handled by the language's own concurrency tools, I am
 not sure yet.


 We have been experiencing some sporadic issues where under load, the
 engine sends callbacks to registered handlers with null as the event.
 We do not have a standalone repro case yet but just wondered what
 other people’s experience is as well as what are the recommendations
 around thread safety.

 Thanks,

 Alex Kritikos
 Software AG
 This communication contains information which is confidential and may
 also be privileged. It is for the exclusive use of the intended
 recipient(s). If you are not the intended recipient(s), please note
 that any distribution, copying, or use of this communication or the
 information in it, is strictly prohibited. If you have received this
 communication in error please notify us by e-mail and then delete the
 e-mail and any copies of it.
 Software AG (UK) Limited Registered in England & Wales 1310740 -
 http://www.softwareag.com/uk

 This communication contains information which is confidential and may also be 
 privileged. It is for the exclusive use of the intended recipient(s). If you 
 are not the intended recipient(s), please note that any distribution, 
 copying, or use of this communication or the information in it, is strictly 
 prohibited. If you have received this communication in error please notify us 
 by e-mail and then delete the e-mail and any copies of it.
  Software AG (UK) Limited Registered in England & Wales 1310740 - 
 http://www.softwareag.com/uk




-- 
Tim Bish
Sr Software Engineer | RedHat Inc.
tim.b...@redhat.com | www.redhat.com 
twitter: @tabish121
blog: http://timbish.blogspot.com/



Re: something rotten in the state of... something or other

2015-06-10 Thread Flavio Percoco

On 09/06/15 12:30 -0400, Ken Giusti wrote:

A betting man would wager it has something to do with the recent changes to the 
python setup.py.

I'll have a look into it.

- Original Message -

From: Gordon Sim g...@redhat.com
To: proton@qpid.apache.org
Sent: Tuesday, June 9, 2015 11:57:25 AM
Subject: something rotten in the state of... something or other

I've recently started seeing errors[1] when running tests due to left
over artefacts of previous builds. This happens even for a completely
clean build directory, as some of the offending artefacts seem to be
created in the source tree.

Jython seems to be trying and failing to load cproton. With a completely
clean source and build tree, everything passes, but it is kind of
annoying to have to rely on that. Is anyone else seeing anything
similar? Any ideas as to the cause (I've only seen it happening quite
recently) or possible cures?


I haven't been able to replicate this but I'm afraid it might be my
fault. Does it happen to you every time?

Thanks,
Flavio




[1]:

 ---
  T E S T S
 ---
 Running org.apache.qpid.proton.InteropTest
 Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.119 sec
 Running org.apache.qpid.proton.JythonTest
 2015-06-09 16:49:29.705 INFO About to call Jython test script:
 '/home/gordon/projects/proton-git/tests/python/proton-test' with
 '/home/gordon/projects/proton-git/tests/python' added to Jython path
 Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 5.207 sec
  FAILURE!
 test(org.apache.qpid.proton.JythonTest)  Time elapsed: 5.203 sec  
 FAILURE!
 java.lang.AssertionError: Caught PyException on invocation number 2:
 Traceback (most recent call last):
   File /home/gordon/projects/proton-git/tests/python/proton-test, line
   616, in module
 m = __import__(name, None, None, [dummy])
   File
   /home/gordon/projects/proton-git/tests/python/proton_tests/__init__.py,
   line 20, in module
 import proton_tests.codec
   File
   /home/gordon/projects/proton-git/tests/python/proton_tests/codec.py,
   line 20, in module
 import os, common, sys
   File
   /home/gordon/projects/proton-git/tests/python/proton_tests/common.py,
   line 26, in module
 from proton import Connection, Transport, SASL, Endpoint, Delivery, SSL
   File
   
/home/gordon/projects/proton-git/tests/../proton-c/bindings/python/proton/__init__.py,
   line 33, in module
 from cproton import *
   File
   
/home/gordon/projects/proton-git/tests/../proton-c/bindings/python/cproton.py,
   line 29, in module
 import _cproton
 ImportError: No module named _cproton
  with message: null
at org.junit.Assert.fail(Assert.java:93)
at org.apache.qpid.proton.JythonTest.runTestOnce(JythonTest.java:120)
at org.apache.qpid.proton.JythonTest.test(JythonTest.java:95)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at

sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at

sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at

org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:45)
at

org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
at

org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:42)
at

org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:263)
at

org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:68)
at

org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:47)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:231)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:60)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:229)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:50)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:222)
at org.junit.runners.ParentRunner.run(ParentRunner.java:300)
at

org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:252)
at

org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:141)
at

org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:112)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at

sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at

sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at

org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:189)
at


Re: Proton-c Null Messages

2015-06-10 Thread Gordon Sim

On 06/10/2015 01:26 PM, dylan25 wrote:

We're using Apache Apollo. Is there the possibility that Apollo has a
server-side configuration setting that limits message sizes?


Even if it did, that wouldn't explain why it would send out a frame with 
no delivery tag. It sounds like it may be a bug in Apollo to me (or 
perhaps in the proton-j version used by Apollo).





Re: something rotten in the state of... something or other

2015-06-10 Thread aconway
On Tue, 2015-06-09 at 17:38 +0100, Robbie Gemmell wrote:
 I'm not seeing that currently, but I have seen similar sort of things
 a couple of times in the past.
 
 As you mention, some files get created in the source tree (presumably
 by or due to use of Jython), outwith the normal build areas they would
 otherwise be in (which would lead to them being cleaned up), and I
 think that is part of the problem sometimes. If the shim, binding or
 test files get updated in certain ways, some bits can become out of
 sync, leading to the type of issue you saw.
 
 CI doesn't see the issue as it blows away all unversioned files before
 each update. If I see it locally I've just used git-clean to tidy up
 my checkout. I'm not sure what we can do otherwise except put
 something together that targets all the extraneous files and removes
 them.

Better to fix dependencies so things get rebuilt properly than to
simply blow them away. Could it be broken swig dependencies leaving
out-of-date .py files around? I've noticed before that swig does not always
get re-run when it should be.

 Robbie
 
 On 9 June 2015 at 16:57, Gordon Sim g...@redhat.com wrote:
  I've recently started seeing errors[1] when running tests due to 
  left over
  artefacts of previous builds. This happens even for a completely 
  clean build
  directory, as some of the offending artefacts seem to be created in 
  the
  source tree.
  
  Jython seems to be trying and failing to load cproton. With a 
  completely
  clean source and build tree, everything passes, but it is kind of 
  annoying
  to have to rely on that. Is anyone else seeing anything similar? 
  Any ideas
  as to the cause (I've only seen it happening quite recently) or 
  possible
  cures?
  
  
  [1]:
  
   ---
T E S T S
   ---
   Running org.apache.qpid.proton.InteropTest
   Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 
   0.119 sec
   Running org.apache.qpid.proton.JythonTest
   2015-06-09 16:49:29.705 INFO About to call Jython test script:
   '/home/gordon/projects/proton-git/tests/python/proton-test' with
   '/home/gordon/projects/proton-git/tests/python' added to Jython 
   path
   Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 
   5.207 sec
FAILURE!
   test(org.apache.qpid.proton.JythonTest)  Time elapsed: 5.203 sec 

   FAILURE!
   java.lang.AssertionError: Caught PyException on invocation number 
   2:
   Traceback (most recent call last):
 File /home/gordon/projects/proton-git/tests/python/proton
   -test, line
   616, in module
   m = __import__(name, None, None, [dummy])
 File
   /home/gordon/projects/proton
   -git/tests/python/proton_tests/__init__.py,
   line 20, in module
   import proton_tests.codec
 File
   /home/gordon/projects/proton
   -git/tests/python/proton_tests/codec.py, line
   20, in module
   import os, common, sys
 File
   /home/gordon/projects/proton
   -git/tests/python/proton_tests/common.py, line
   26, in module
   from proton import Connection, Transport, SASL, Endpoint, 
   Delivery,
   SSL
 File
   /home/gordon/projects/proton-git/tests/../proton
   -c/bindings/python/proton/__init__.py,
   line 33, in module
   from cproton import *
 File
   /home/gordon/projects/proton-git/tests/../proton
   -c/bindings/python/cproton.py,
   line 29, in module
   import _cproton
   ImportError: No module named _cproton
with message: null
    at org.junit.Assert.fail(Assert.java:93)
    at org.apache.qpid.proton.JythonTest.runTestOnce(JythonTest.java:120)
    at org.apache.qpid.proton.JythonTest.test(JythonTest.java:95)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:45)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:42)
    at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
    at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:263)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:68)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:47)
    at org.junit.runners.ParentRunner$3.run(ParentRunner.java:231)
    at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:60)
   at
   

Re: Proton-c Null Messages

2015-06-10 Thread aconway
On Tue, 2015-06-09 at 19:54 +0100, Gordon Sim wrote:
 On 06/09/2015 06:40 PM, logty wrote:
  When I run the client I get:
  
  [0x5351db0]:0 - @transfer(20) [handle=0, delivery-id=0, delivery
  -tag=b,
  message-format=0, settled=true, more=true] (16363) 
  \x00Sp\xc0\x07\x05B...
 
 My guess would be that it is the delivery tag being null (or empty, 
 can't tell which) that is the problem. From the spec:
 
  This field MUST be specified for the first transfer of
  a multi-transfer message and can only be omitted for
  continuation transfers. [section 2.7.5]
 
 So I think that whatever is sending that frame has a bug. Proton-c 
 has a 
 bug too of course, since it shouldn't segfault but should close the 
 connection with a framing-error or similar.

It says the field must be specified; it does not say it must not be an
empty binary value. Is the field really missing, or is proton choking on
a 0-length delivery tag? It shouldn't choke, which might explain why
rabbit is OK with it.

  And then the segfault occurs when transfering a 5 MB message, and 
  it is only
  coming through as this 16 KB message.
 


Re: [Resending] - Proton-J engine and thread safety

2015-06-10 Thread Kritikos, Alex
Hi Alan,

thanks for your response. We also use an engine per connection however there 
are different read and write threads interacting with it and the issues only 
occur under load.
I guess we should try to create a repro case.

Thanks,

Alex Kritikos
Software AG
On 10 Jun 2015, at 16:50, aconway acon...@redhat.com wrote:

 On Wed, 2015-06-10 at 09:34 +, Kritikos, Alex wrote:
 [Resending as it ended up in the wrong thread]

 Hi all,

 is the proton-j engine meant to be thread safe?

 The C engine is definitely NOT meant to be thread safe. Unless you have
 found an explicit written declaration that the java engine is supposed
 to be AND a bunch of code to back that up, I wouldn't rely on it.

 The way we use proton in the C++ broker and in the upcoming Go binding
 is to create an engine per connection and serialize the action on each
 connection. In principle you can read and write from the OS connection
 concurrently, but it's debatable how much you gain; you are likely just
 moving OS buffers into app buffers, which is not a big win.

 The inbound and outbound protocol state *for a single connection* is
 pretty closely tied together. Proton is probably taking the right
 approach by assuming both are handled in a single concurrency context.

 The engine state for separate connections is *completely independent*
 so it's safe to run engines for separate connections in separate
 contexts.

 The recent reactor extensions to proton are interesting but not thread
 friendly. They force the protocol handling for multiple connections
 into a single thread context, which is great for single threaded apps
 but IMO the wrong way to go for concurrent apps.

 The go binding uses channels to pump data from connection read/write
 goroutines to a proton engine event loop goroutine per connection. The
 C++ broker predates the reactor and does its own polling, with
 read/write activity on an FD dispatched sequentially to worker
 threads, so the proton engine for a connection is never used
 concurrently.
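The channel-per-connection pattern described above can be sketched with stdlib queues. Everything here is illustrative: the list standing in for engine state and all the names are hypothetical, not the proton or Go-binding API.

```python
import queue
import threading

# One event-loop thread per connection owns the engine; read/write threads
# only enqueue work. The "engine" here is just a list standing in for real
# (non-thread-safe) proton engine state.
events = queue.Queue()
engine_state = []

def engine_loop():
    while True:
        item = events.get()
        if item is None:           # sentinel tells the loop to shut down
            break
        engine_state.append(item)  # all engine work happens on this thread

loop = threading.Thread(target=engine_loop)
loop.start()

# Reader/writer threads would call events.put(...); they never touch
# engine_state directly, so no locking of the engine itself is needed.
for chunk in (b"open", b"begin", b"attach"):
    events.put(chunk)
events.put(None)
loop.join()
print(engine_state)  # [b'open', b'begin', b'attach']
```

Because a single thread consumes the FIFO queue, the engine sees events strictly in order even though producers run concurrently.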

 There may be something interesting we can do at the proton layer to
 help with this pattern, or it may be better to leave concurrency above
 the binding to be handled by the language's own concurrency tools; I am
 not sure yet.


 We have been experiencing some sporadic issues where under load, the
 engine sends callbacks to registered handlers with null as the event.
 We do not have a standalone repro case yet but just wondered what
 other people’s experience is as well as what are the recommendations
 around thread safety.

 Thanks,

 Alex Kritikos
 Software AG
 This communication contains information which is confidential and may
 also be privileged. It is for the exclusive use of the intended
 recipient(s). If you are not the intended recipient(s), please note
 that any distribution, copying, or use of this communication or the
 information in it, is strictly prohibited. If you have received this
 communication in error please notify us by e-mail and then delete the
 e-mail and any copies of it.
 Software AG (UK) Limited Registered in England & Wales 1310740 -
 http://www.softwareag.com/uk





Re: [Resending] - Proton-J engine and thread safety

2015-06-10 Thread aconway
On Wed, 2015-06-10 at 14:18 +, Kritikos, Alex wrote:
 Hi Alan,
 
 thanks for your response. We also use an engine per connection 
 however there are different read and write threads interacting with 
 it and the issues only occur under load.
 I guess we should try to create a repro case.

You need to serialize the read and write threads; the engine is not
safe for concurrent use at all. My blabbering about read and write
concurrency may have misled you. 

You could simply mutex-lock the engine in your read/write threads.
Depending on what else you are doing, beware of contention and deadlocks.

The C++ (and I think Java) brokers handle this in the poller: we take
the FD out of the poller on a read or write event, do the relevant
proton work, then put it back to get further events. That way you can't
have concurrent read/write on an engine.
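The mutex-lock suggestion amounts to something like the following stdlib-only sketch. The wrapper class and its names are hypothetical illustrations, not part of proton-j or any binding; the list merely stands in for a non-thread-safe engine.

```python
import threading

class LockedEngine:
    """Serialize all access to a non-thread-safe engine behind one mutex.
    'engine' is any object standing in for a proton engine; the wrapper
    is illustrative, not proton API."""

    def __init__(self, engine):
        self._engine = engine
        self._lock = threading.Lock()

    def run(self, fn):
        # Both the read thread and the write thread call run(); the lock
        # guarantees the engine never sees concurrent use.
        with self._lock:
            return fn(self._engine)

# Usage: two threads funnel their engine work through the same wrapper.
engine = LockedEngine([])
t1 = threading.Thread(target=lambda: engine.run(lambda e: e.append("read")))
t2 = threading.Thread(target=lambda: engine.run(lambda e: e.append("write")))
t1.start(); t2.start(); t1.join(); t2.join()
```

The poller approach described above achieves the same guarantee without a lock, by never arming read and write events for one FD at the same time.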



 
 Thanks,
 
 Alex Kritikos
 Software AG
 On 10 Jun 2015, at 16:50, aconway acon...@redhat.com wrote:
 
  On Wed, 2015-06-10 at 09:34 +, Kritikos, Alex wrote:
   [Resending as it ended up in the wrong thread]
   
   Hi all,
   
   is the proton-j engine meant to be thread safe?
  
  The C engine is definitely NOT meant to be thread safe. Unless you have
  found an explicit written declaration that the java engine is supposed
  to be AND a bunch of code to back that up, I wouldn't rely on it.
  
  The way we use proton in the C++ broker and in the upcoming Go binding
  is to create an engine per connection and serialize the action on each
  connection. In principle you can read and write from the OS connection
  concurrently, but it's debatable how much you gain; you are likely just
  moving OS buffers into app buffers, which is not a big win.
  
  The inbound and outbound protocol state *for a single connection* is
  pretty closely tied together. Proton is probably taking the right
  approach by assuming both are handled in a single concurrency context.
  
  The engine state for separate connections is *completely independent*
  so it's safe to run engines for separate connections in separate
  contexts.
  
  The recent reactor extensions to proton are interesting but not thread
  friendly. They force the protocol handling for multiple connections
  into a single thread context, which is great for single threaded apps
  but IMO the wrong way to go for concurrent apps.
  
  The go binding uses channels to pump data from connection read/write
  goroutines to a proton engine event loop goroutine per connection. The
  C++ broker predates the reactor and does its own polling, with
  read/write activity on an FD dispatched sequentially to worker
  threads, so the proton engine for a connection is never used
  concurrently.
  
  There may be something interesting we can do at the proton layer to
  help with this pattern, or it may be better to leave concurrency above
  the binding to be handled by the language's own concurrency tools; I am
  not sure yet.
  
  
   We have been experiencing some sporadic issues where under load, the
   engine sends callbacks to registered handlers with null as the event.
   We do not have a standalone repro case yet but just wondered what
   other people’s experience is as well as what are the recommendations
   around thread safety.
   
   Thanks,
   
   Alex Kritikos
   Software AG
   
 
 


Re: something rotten in the state of... something or other

2015-06-10 Thread Gordon Sim

On 06/10/2015 03:24 PM, Flavio Percoco wrote:

On 09/06/15 12:30 -0400, Ken Giusti wrote:

A betting man would wager it has something to do with the recent
changes to the python setup.py.

I'll have a look into it.

- Original Message -

From: Gordon Sim g...@redhat.com
To: proton@qpid.apache.org
Sent: Tuesday, June 9, 2015 11:57:25 AM
Subject: something rotten in the state of... something or other

I've recently started seeing errors[1] when running tests due to left
over artefacts of previous builds. This happens even for a completely
clean build directory, as some of the offending artefacts seem to be
created in the source tree.

Jython seems to be trying and failing to load cproton. With a completely
clean source and build tree, everything passes, but it is kind of
annoying to have to rely on that. Is anyone else seeing anything
similar? Any ideas as to the cause (I've only seen it happening quite
recently) or possible cures?


I haven't been able to replicate this but I'm afraid it might be my
fault. Does it happen to you every time?


Pretty much, yes. On more digging, it appears that the 
proton-c/bindings/python/cproton.py file in the source tree is causing 
the problem. It seems to get generated when running the 
python-tox-tests, and once it's there the proton-java tests fail with the 
error pasted previously.


If I delete that file and also delete the .pyc, .class and .pyo objects 
in the source tree, then the java tests pass again.


Re: Proton-c Null Messages

2015-06-10 Thread Gordon Sim

On 06/10/2015 04:01 PM, aconway wrote:

On Tue, 2015-06-09 at 19:54 +0100, Gordon Sim wrote:

On 06/09/2015 06:40 PM, logty wrote:

When I run the client I get:

[0x5351db0]:0 - @transfer(20) [handle=0, delivery-id=0, delivery
-tag=b,
message-format=0, settled=true, more=true] (16363)
\x00Sp\xc0\x07\x05B...


My guess would be that it is the delivery tag being null (or empty,
can't tell which) that is the problem. From the spec:

  This field MUST be specified for the first transfer of
  a multi-transfer message and can only be omitted for
  continuation transfers. [section 2.7.5]

So I think that whatever is sending that frame has a bug. Proton-c
has a
bug too of course, since it shouldn't segfault but should close the
connection with a framing-error or similar.


It says the field must be specified; it does not say it must not be an
empty binary value. Is the field really missing, or is proton choking on
a 0-length delivery tag?


I'm not sure the distinction between null and an empty value is very 
useful here. The intent is that the delivery is clearly identified. I 
would argue that a 'zero byte identifier' doesn't meet the spirit of the 
law there.
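For reference, the null-versus-empty distinction being debated is concrete on the wire: AMQP 1.0 encodes a null field as the single constructor byte 0x40, while a zero-length binary (vbin8) is the constructor 0xa0 followed by a zero length byte. A toy classifier, purely for illustration (the function name is made up, not a proton API):

```python
# AMQP 1.0 primitive encodings relevant to the delivery-tag debate.
NULL = b"\x40"             # null: bare constructor byte
EMPTY_VBIN8 = b"\xa0\x00"  # vbin8 with a zero length byte

def classify_delivery_tag(encoded: bytes) -> str:
    # Distinguish the three cases discussed in this thread.
    if encoded == NULL:
        return "missing (null)"
    if encoded[:1] == b"\xa0" and encoded[1] == 0:
        return "empty binary"
    return "present"

# A 1-byte tag with ASCII value '0', like the receiver trace later in
# this thread, encodes as 0xa0 0x01 0x30 and is a perfectly valid tag.
print(classify_delivery_tag(b"\xa0\x010"))  # present
```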



It shouldn't, which might explain why rabbit is OK with
it.


I don't think RabbitMQ is ever seeing that frame. I believe that frame 
is emitted by ApolloMQ to the receiving client.


I agree that proton should not choke on a zero byte delivery tag (or 
indeed on a non-existent delivery tag). But I do think it's a bug to 
send such a frame.


(I should say of course that this is all still somewhat speculative, 
based only on a snippet of protocol trace and this thread. I haven't 
actually reproduced it myself to verify the bad behavior in ApolloMQ or 
that the crash in proton is caused by the delivery tag value.)




[jira] [Commented] (PROTON-906) Would be nice to make durable subscriptions simpler

2015-06-10 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/PROTON-906?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14580693#comment-14580693
 ] 

ASF subversion and git services commented on PROTON-906:


Commit f252261b971f9bcc64dc4c54de95f9429c5766bc in qpid-proton's branch 
refs/heads/master from [~gsim]
[ https://git-wip-us.apache.org/repos/asf?p=qpid-proton.git;h=f252261 ]

PROTON-906: add DurableSubscription option utility


 Would be nice to make durable subscriptions simpler
 ---

 Key: PROTON-906
 URL: https://issues.apache.org/jira/browse/PROTON-906
 Project: Qpid Proton
  Issue Type: Improvement
  Components: python-binding
Affects Versions: 0.9.1
Reporter: Gordon Sim
Assignee: Gordon Sim
Priority: Minor
 Fix For: 0.10


 To get behaviour similar to that of the proton based JMS client's durable 
 subscriptions, the durability and expiry-policy need to be set. Providing a 
 simple option that shields the user from the detailed spec knowledge where 
 desired would be a nice improvement.
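As a rough stdlib-only model of what such an option utility does: it bundles the terminus-durability and expiry-policy settings into one step. The class and constant names below are illustrative stand-ins mirroring the AMQP terminus values, not the API the commit actually adds.

```python
# AMQP terminus settings a durable subscription needs (values per spec:
# terminus-durability unsettled-state = 2, expiry-policy "never").
UNSETTLED_STATE = 2
EXPIRE_NEVER = "never"

class Source:
    """Stand-in for a receiver's source terminus with spec defaults."""
    def __init__(self):
        self.durability = 0                 # default: none
        self.expiry_policy = "session-end"  # default expiry policy

class DurableSubscriptionOption:
    """Applies both settings in one step, shielding the user from the
    detailed spec knowledge -- the improvement this ticket describes."""
    def apply(self, source: Source) -> None:
        source.durability = UNSETTLED_STATE
        source.expiry_policy = EXPIRE_NEVER

src = Source()
DurableSubscriptionOption().apply(src)
print(src.durability, src.expiry_policy)  # 2 never
```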



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (PROTON-906) Would be nice to make durable subscriptions simpler

2015-06-10 Thread Gordon Sim (JIRA)

 [ 
https://issues.apache.org/jira/browse/PROTON-906?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gordon Sim resolved PROTON-906.
---
Resolution: Fixed

 Would be nice to make durable subscriptions simpler
 ---

 Key: PROTON-906
 URL: https://issues.apache.org/jira/browse/PROTON-906
 Project: Qpid Proton
  Issue Type: Improvement
  Components: python-binding
Affects Versions: 0.9.1
Reporter: Gordon Sim
Assignee: Gordon Sim
Priority: Minor
 Fix For: 0.10


 To get behaviour similar to that of the proton based JMS client's durable 
 subscriptions, the durability and expiry-policy need to be set. Providing a 
 simple option that shields the user from the detailed spec knowledge where 
 desired would be a nice improvement.





Re: Proton-c Null Messages

2015-06-10 Thread Gordon Sim

On 06/10/2015 07:26 PM, logty wrote:

The odd thing is the sender is specifying the delivery-id; here is the output
of PN_TRACE_FRM=1 on the sender side:

[0x86b6580]:0 - @transfer(20) [handle=0, delivery-id=0,
delivery-tag=b\x00\x00\x00\x00\x00\x00\x00\x00, message-format=0,
settled=true, more=false] (5243441) \x00Sp\xd0\x00\x00\x00\x0b\x00\x...


The delivery tag used by the broker when sending the message out to a 
consuming client is not (or at least need not be) the same as the one 
used by the publishing client.



This is the output from one of the tests, but I can replicate it just using
example programs and inputting large amounts of data, yet even with small
amounts of data the delivery-tag is still not being set when sent to a
receiver.

Here is the sender:
[0x17471b0]:0 - @transfer(20) [handle=0, delivery-id=3,
delivery-tag=b\x03\x00\x00\x00\x00\x00\x00\x00, message-format=0,
settled=false, more=false] (109)
\x00Sp\xd0\x00\x00\x00\x0b\x00\x00\x00\x05BP\x04@BR\x00\x00Ss\xd0\x00\x00\x00L\x00\x00\x00\x0d@@\xa1$amqp://127.0.0.1:61613/topic://event\xa1\x04Test\x83\x00\x00\x00\x00\x00\x00\x00\x00\x83\x00\x00\x00\x00\x00\x00\x00\x00@R\x00@\x00Sw\xa1\x01a

Here is the receiver:
[0x1de3ac0]:0 - @transfer(20) [handle=0, delivery-id=5, delivery-tag=b0,


This is OK; that is a 1-byte tag with the ASCII value '0', I believe. 
However, the original trace I commented on had delivery-tag=b, which is 
a zero-byte value.



message-format=0] (122)
\x00Sp\xc0\x07\x05BP\x04@BC\x00Ss\xc0G\x0c@@\xa1$amqp://127.0.0.1:61613/topic://event\xa1\x04Test\x83\x00\x00\x00\x00\x00\x00\x00\x00\x83\x00\x00\x00\x00\x00\x00\x00\x00@C\x00Sw\xa1\x01a\x00Sx\xc1\x17\x02\xa3\x06origin\xa1\x0cmybroker-43c



--
View this message in context: 
http://qpid.2158936.n2.nabble.com/Proton-c-Null-Messages-tp7625967p7626224.html
Sent from the Apache Qpid Proton mailing list archive at Nabble.com.





[jira] [Assigned] (PROTON-905) Long-lived connections leak sessions and links

2015-06-10 Thread Ken Giusti (JIRA)

 [ 
https://issues.apache.org/jira/browse/PROTON-905?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ken Giusti reassigned PROTON-905:
-

Assignee: Ken Giusti

 Long-lived connections leak sessions and links
 --

 Key: PROTON-905
 URL: https://issues.apache.org/jira/browse/PROTON-905
 Project: Qpid Proton
  Issue Type: Bug
  Components: proton-c
Affects Versions: 0.9.1
Reporter: Ken Giusti
Assignee: Ken Giusti
Priority: Blocker
 Fix For: 0.10

 Attachments: test-send.py


 I found this issue while debugging a crash dump of qpidd.
 Long-lived connections do not free their sessions/links.
 This only applies when NOT using the event model.  The version of qpidd I 
 tested against (0.30) still uses the iterative model.  Point to consider: I 
 don't know why this is the case.
 Details:  I have a test script that opens a single connection, then 
 continually creates sessions/links over that connection, sending one message 
 before closing and freeing the sessions/links.  See attached.
 Over time the qpidd runtime consumes all memory on the system and is killed 
 by OOM.  To be clear, I'm using drain to remove all sent messages - there is 
 no message build-up.
 On debugging this, I'm finding thousands of session objects on the 
 connection's free-sessions weakref list.  Every one of those sessions has a 
 refcount of one.
 Once the connection is finalized, all session objects are freed.  But until 
 then, freed sessions continue to accumulate indefinitely.
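The reported lifecycle can be modelled in a few lines of plain Python. The classes below are illustrative only, not proton internals: the point is that "freeing" a session merely parks it on a list the connection holds strongly until finalization, so freed sessions accumulate for the life of the connection.

```python
class Session:
    def __init__(self, connection):
        self.connection = connection

    def free(self):
        # Models the reported behaviour: the session is not released,
        # just queued on the connection until the connection is finalized.
        self.connection.free_sessions.append(self)

class Connection:
    def __init__(self):
        self.free_sessions = []  # strong references, only dropped at finalize

    def session(self):
        return Session(self)

conn = Connection()
for _ in range(1000):  # long-lived connection, many short-lived sessions
    s = conn.session()
    s.free()           # "freed", but still strongly referenced

print(len(conn.free_sessions))  # 1000 -- the accumulation seen before OOM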





Re: Proton-c Null Messages

2015-06-10 Thread logty
The odd thing is the sender is specifying the delivery-id; here is the output
of PN_TRACE_FRM=1 on the sender side:

[0x86b6580]:0 - @transfer(20) [handle=0, delivery-id=0,
delivery-tag=b\x00\x00\x00\x00\x00\x00\x00\x00, message-format=0,
settled=true, more=false] (5243441) \x00Sp\xd0\x00\x00\x00\x0b\x00\x...

This is the output from one of the tests, but I can replicate it just using
example programs and inputting large amounts of data, yet even with small
amounts of data the delivery-tag is still not being set when sent to a
receiver.

Here is the sender:
[0x17471b0]:0 - @transfer(20) [handle=0, delivery-id=3,
delivery-tag=b\x03\x00\x00\x00\x00\x00\x00\x00, message-format=0,
settled=false, more=false] (109)
\x00Sp\xd0\x00\x00\x00\x0b\x00\x00\x00\x05BP\x04@BR\x00\x00Ss\xd0\x00\x00\x00L\x00\x00\x00\x0d@@\xa1$amqp://127.0.0.1:61613/topic://event\xa1\x04Test\x83\x00\x00\x00\x00\x00\x00\x00\x00\x83\x00\x00\x00\x00\x00\x00\x00\x00@R\x00@\x00Sw\xa1\x01a

Here is the receiver:
[0x1de3ac0]:0 - @transfer(20) [handle=0, delivery-id=5, delivery-tag=b0,
message-format=0] (122)
\x00Sp\xc0\x07\x05BP\x04@BC\x00Ss\xc0G\x0c@@\xa1$amqp://127.0.0.1:61613/topic://event\xa1\x04Test\x83\x00\x00\x00\x00\x00\x00\x00\x00\x83\x00\x00\x00\x00\x00\x00\x00\x00@C\x00Sw\xa1\x01a\x00Sx\xc1\x17\x02\xa3\x06origin\xa1\x0cmybroker-43c



--
View this message in context: 
http://qpid.2158936.n2.nabble.com/Proton-c-Null-Messages-tp7625967p7626224.html
Sent from the Apache Qpid Proton mailing list archive at Nabble.com.


[jira] [Commented] (PROTON-866) Implement SASL external with TLS client authentication

2015-06-10 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/PROTON-866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14581066#comment-14581066
 ] 

ASF subversion and git services commented on PROTON-866:


Commit 1cfeef1c03d4607844320320ab50054f750f3aa8 in qpid-proton's branch 
refs/heads/master from [~astitcher]
[ https://git-wip-us.apache.org/repos/asf?p=qpid-proton.git;h=1cfeef1 ]

PROTON-866: Tell SASL the external ssf and authid when we detect SASL
- Add Internal API to set external ssf/authid to SASL


 Implement SASL external with TLS client authentication
 --

 Key: PROTON-866
 URL: https://issues.apache.org/jira/browse/PROTON-866
 Project: Qpid Proton
  Issue Type: Sub-task
  Components: proton-c
Reporter: Andrew Stitcher
Assignee: Andrew Stitcher




