Re: [VOTE]: Release Proton 0.5 RC3 as 0.5 final
[X ] Yes, release 0.5 RC3 as 0.5 final [ ] No, 0.5 RC3 has the following issues...
Re: 0.5 RC1
Note that I've just committed a change for PROTON-270, which means the Maven artifactId for the tests module is now proton-tests rather than just tests. Presumably you are happy to include this in 0.5. Phil

On 6 August 2013 13:34, Ken Giusti kgiu...@redhat.com wrote: We are planning on creating an 0.5 branch at some point, correct? I ask as we've branched all preceding releases - I'd expect the same for 0.5. thanks, -K

- Original Message - From: Rafael Schloming r...@alum.mit.edu To: proton@qpid.apache.org Sent: Tuesday, August 6, 2013 6:38:11 AM Subject: Re: 0.5 RC1

The svn info for every release artifact is captured in the artifact itself in the SVN_INFO file; in this case it is:

Repo: http://svn.apache.org/repos/asf/qpid/proton
Branch: trunk
Revision: 1510659

I don't anticipate anything going into trunk that won't appear in 0.5 final; however, if necessary we can branch. Either way, I would test the tarball, as that form is what we will be releasing and it is possible that build/installation stuff might work fine from svn but have issues when running from the tarball. --Rafael

On Tue, Aug 6, 2013 at 3:07 AM, Bozo Dragojevic bo...@digiverse.si wrote: On 5. 08. 13 21:14, Rafael Schloming wrote: As promised, here is 0.5 RC1: http://people.apache.org/~rhs/qpid-proton-0.5rc1/ Java binaries are here: https://repository.apache.org/content/repositories/orgapacheqpid-064/ --Rafael

Rafael, which svn revision corresponds to this? r1510511 or current top-of-trunk r1510646? Should I be testing the tar or top-of-trunk? Will trunk get fixes that will *not* appear in 0.5 final? Thanks, Bozzo
Re: proton-j API factory simplification.
I agree that o.a.q.p.Proton is, overall, an improvement. I was partly responsible for creating the ProtonFactoryLoader and XXXFactory classes, and acknowledge that they make life too hard for the user. This was a result of trying to meet the following design goals:

1. User code should not need to have a compile-time dependency on any proton-c/j/jni classes. Given our current separation of the proton-api from the proton-impl/proton-jni modules, it means user code should only depend on proton-api at compile-time.
2. Classes from the various top-level packages, such as engine, messenger etc, should be kept separate unless they really need to be together.

I still believe in goal 1 (though this will be discussed at greater length on the related thread [1]), but am relaxed about item 2. So, I'd be in favour of Hiram's proposal if ProtonJ and ProtonC reside in proton-api.jar. This would be very easy to do, e.g.

public class ProtonJ extends Proton {
    ...
    public ProtonJ() {
        engineFactory = new ProtonFactoryLoader<EngineFactory>(EngineFactory.class, PROTON_J);
        ...
    }
    ...
}

Phil

[1] http://qpid.2158936.n2.nabble.com/Java-Packaging-Organizational-Issues-tt7596353.html

On 1 August 2013 18:18, Rafael Schloming r...@alum.mit.edu wrote: I like this idea. Right now I'm at a loss to understand what all the factory business is for, and I'm actually pretty familiar with the codebase. I don't think our users stand a snowball's chance in hell of sorting through the myriad of factories, factory impls, service loaders, and service loader impls needed in order to get started with even a simple example. The current Proton.java class is a step in the right direction; however, with all the other factories lying around it kind of gets lost in the noise. It would be good if we could enforce a single entry point at the code level, and what you're describing sounds like it would be pretty simple/easy to explain to users.
It would be nice if we could get to the point where we have only one public entry point class inside each impl. IMHO, that would make the API way more discoverable even with only minimal javadoc. --Rafael

On Thu, Aug 1, 2013 at 12:50 PM, Hiram Chirino hi...@hiramchirino.com wrote: Hi folks, I was just thinking perhaps we should simplify all the factory stuff in the proton API. Mostly get rid of it. Don't think it's really needed. Mainly I think we need to make Proton an interface and let folks assign it the desired implementation. Something like:

Proton p = new ProtonJ();
or
Proton p = new ProtonC();

where ProtonJ and ProtonC are in the respective implementation jars. If folks really want to make it configurable, they can easily build an if statement to pick the impl that they desire. -- Hiram Chirino Engineering | Red Hat, Inc. hchir...@redhat.com | fusesource.com | redhat.com skype: hiramchirino | twitter: @hiramchirino blog: Hiram Chirino's Bit Mojo
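To make the shape of Hiram's proposal concrete, here is a minimal, hypothetical sketch. Only the names Proton, ProtonJ and ProtonC come from the thread; everything else (the Engine interface, EntryPointSketch, the pick helper) is invented for illustration and is not the actual proton-api.

```java
// Hypothetical sketch of the proposed single entry point per impl;
// not the real proton-api. Proton becomes a plain interface and each
// implementation jar ships one concrete class that user code news up.

interface Engine {
    String name();
}

interface Proton {
    Engine engine();
}

// Would live in the proton-j implementation jar.
class ProtonJ implements Proton {
    public Engine engine() { return () -> "proton-j"; }
}

// Would live in the proton-c (JNI) implementation jar.
class ProtonC implements Proton {
    public Engine engine() { return () -> "proton-c"; }
}

public class EntryPointSketch {
    // "if folks really want to make it configurable, they can easily
    // build an if statement to pick the impl that they desire"
    static Proton pick(String impl) {
        return "proton-c".equals(impl) ? new ProtonC() : new ProtonJ();
    }

    public static void main(String[] args) {
        Proton p = pick("proton-j");
        System.out.println(p.engine().name());
    }
}
```

The point of the sketch is that there is exactly one public entry point per implementation, so no factory loaders or service-loader indirection are needed to get started.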
Re: proton-j API factory simplification.
Corrected typo in the code inline:

On 2 August 2013 09:44, Phil Harvey p...@philharveyonline.com wrote: I agree that o.a.q.p.Proton is, overall, an improvement. I was partly responsible for creating the ProtonFactoryLoader and XXXFactory classes, and acknowledge that they make life too hard for the user. This was a result of trying to meet the following design goals:

1. User code should not need to have a compile-time dependency on any proton-c/j/jni classes. Given our current separation of the proton-api from the proton-impl/proton-jni modules, it means user code should only depend on proton-api at compile-time.
2. Classes from the various top-level packages, such as engine, messenger etc, should be kept separate unless they really need to be together.

I still believe in goal 1 (though this will be discussed at greater length on the related thread [1]), but am relaxed about item 2. So, I'd be in favour of Hiram's proposal if ProtonJ and ProtonC reside in proton-api.jar. This would be very easy to do, e.g.

public class ProtonJ extends Proton {
    ...
    public ProtonJ() {
        engineFactory = new ProtonFactoryLoader<EngineFactory>(EngineFactory.class, PROTON_J);
        // oops, should have been:
        // engineFactory = new ProtonFactoryLoader<EngineFactory>(EngineFactory.class, PROTON_J).loadFactory();
        ...
    }
    ...
}

Phil

[1] http://qpid.2158936.n2.nabble.com/Java-Packaging-Organizational-Issues-tt7596353.html

On 1 August 2013 18:18, Rafael Schloming r...@alum.mit.edu wrote: I like this idea. Right now I'm at a loss to understand what all the factory business is for, and I'm actually pretty familiar with the codebase. I don't think our users stand a snowball's chance in hell of sorting through the myriad of factories, factory impls, service loaders, and service loader impls needed in order to get started with even a simple example. The current Proton.java class is a step in the right direction; however, with all the other factories lying around it kind of gets lost in the noise.
It would be good if we could enforce a single entry point at the code level, and what you're describing sounds like it would be pretty simple/easy to explain to users. It would be nice if we could get to the point where we have only one public entry point class inside each impl. IMHO, that would make the API way more discoverable even with only minimal javadoc. --Rafael

On Thu, Aug 1, 2013 at 12:50 PM, Hiram Chirino hi...@hiramchirino.com wrote: Hi folks, I was just thinking perhaps we should simplify all the factory stuff in the proton API. Mostly get rid of it. Don't think it's really needed. Mainly I think we need to make Proton an interface and let folks assign it the desired implementation. Something like: Proton p = new ProtonJ(); or Proton p = new ProtonC(); where ProtonJ and ProtonC are in the respective implementation jars. If folks really want to make it configurable, they can easily build an if statement to pick the impl that they desire. -- Hiram Chirino Engineering | Red Hat, Inc. hchir...@redhat.com | fusesource.com | redhat.com skype: hiramchirino | twitter: @hiramchirino blog: Hiram Chirino's Bit Mojo
Re: Qpid-specific logging facade(s) for Proton etc
I have just committed the first revision [1] of the Proton logging Java classes under PROTON-343. Among the tasks remaining, the bulk of the work will be in proton-c and proton-jni:

1. Defining and implementing proton-c logging functions in line with the new Java API.
2. Implementing proton-jni's logging methods to allow it to pass a Java logger callback to proton-c.

Does anyone have a view on how the proton-c functions should look? Any volunteers for implementing them?

The other outstanding tasks are:
- Define the full set of logging functions in EngineLogger, MessengerLogger et al.
- Modify existing Proton classes to actually use the new logging classes.

Phil

[1] https://svn.apache.org/r1501276

On 25 June 2013 13:25, Rob Godfrey rob.j.godf...@gmail.com wrote: So, my main comment would be that I think the Factories should not be depending on the MessageLoggerSpi as you've defined it, but instead purely on EngineLogger. The MessageLogger stuff is a convenience, but I don't think it should be mandatory to use it. My other comment would be that I don't think ProtonCategory should be defining the qualified name - I think that would be specific to the implementation of the logging. -- Rob

On 25 June 2013 13:28, Phil Harvey p...@philharveyonline.com wrote: I've created a skeleton Java implementation of the Proton logging design and attached it as a patch to https://issues.apache.org/jira/browse/PROTON-343. I think the next steps are:
- Gather comments from folks about the design.
- Sketch out the corresponding proton-c and proton-jni code. I'd appreciate assistance from someone with more proton-c familiarity for this.

Please let me know your thoughts. Thanks, Phil

On 5 June 2013 15:27, Phil Harvey p...@philharveyonline.com wrote: An interesting discussion about logging has emerged from the mailing thread AMQP 1.0 JMS client - supplementary coding standards. I'm starting a new thread for this specific topic and am including the proton list.
To recap, Rob, Rajith, Rafi and Gordon have expressed a desire for Proton and the new JMS client to use a custom logging facade, rather than directly calling log4j, slf4j etc. The Proton logging facade would work consistently across proton-c and proton-j. I think the case for adopting this approach is overwhelming, but am interested in views on the best implementation.

*=== Proton ===*

I added a diagram to the wiki illustrating how this might work for proton-j. It's not finished, but I thought it useful to share it early to stimulate discussion. Hopefully the implied proton-c equivalent is fairly obvious. https://cwiki.apache.org/confluence/display/qpid/Proton+Logging

I'm not sure what would go into ProtonOperationalLogger at the moment (Rob/Rafi may know), but want to leave the door open to separating Proton-specific methods from general-purpose log(Level, String) kind of stuff. It does at least give us a place to define the behaviour of the public logging API that Rob referred to, and which would behave the same as its proton-c counterpart. To me, the Logger interface in the diagram looks very similar to the Qpid Java Broker's RootMessageLogger. Proton *may* use it directly for debug logging.

*=== JMS Client ===*

Turning to the JMS client, my initial preference would be to create interfaces JmsOperationalLogger and JmsLogger corresponding to the Proton ones. The JMS Client would pass to Proton a ProtonLogger implementation that simply wraps its JmsLogger. Alternatively we could create a Logger interface in a central sub-project and use it in both Proton and the JMS Client, but I suspect that will involve more re-jigging of our project structure than we currently have appetite for.

Comments/criticisms etc welcomed. I'm especially interested in whether there are proton-c-specific factors that would significantly affect our implementation. Phil
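As a rough illustration of the facade idea discussed in this thread: the JMS client would hand Proton a ProtonLogger that simply wraps its own JmsLogger, so neither layer binds directly to log4j/slf4j. All types below are invented for illustration (even ProtonLogger and JmsLogger are only sketched from the discussion, not taken from the committed PROTON-343 code).

```java
// Hedged sketch of the logging facade idea; the real PROTON-343
// classes may look quite different.

enum Level { DEBUG, INFO, ERROR }

// What Proton itself would call into - no log4j/slf4j dependency here.
interface ProtonLogger {
    void log(Level level, String message);
}

// The JMS client's own logging facade.
interface JmsLogger {
    void log(Level level, String message);
}

// The JMS client passes Proton a ProtonLogger that wraps its JmsLogger.
class JmsBackedProtonLogger implements ProtonLogger {
    private final JmsLogger delegate;
    JmsBackedProtonLogger(JmsLogger delegate) { this.delegate = delegate; }
    public void log(Level level, String message) {
        delegate.log(level, message);  // pure delegation; binding is the client's choice
    }
}

public class LoggingFacadeSketch {
    static String demo() {
        StringBuilder sink = new StringBuilder();
        JmsLogger jms = (level, msg) -> sink.append(level).append(": ").append(msg);
        new JmsBackedProtonLogger(jms).log(Level.DEBUG, "frame trace");
        return sink.toString();
    }

    public static void main(String[] args) {
        System.out.println(demo());
    }
}
```

The design choice being illustrated: the concrete logging backend (log4j, slf4j, a StringBuilder in tests) is decided entirely by whoever implements JmsLogger, which is the portability property the thread is after.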
Re: Qpid-specific logging facade(s) for Proton etc
I've created a skeleton Java implementation of the Proton logging design and attached it as a patch to https://issues.apache.org/jira/browse/PROTON-343. I think the next steps are:
- Gather comments from folks about the design.
- Sketch out the corresponding proton-c and proton-jni code. I'd appreciate assistance from someone with more proton-c familiarity for this.

Please let me know your thoughts. Thanks, Phil

On 5 June 2013 15:27, Phil Harvey p...@philharveyonline.com wrote: An interesting discussion about logging has emerged from the mailing thread AMQP 1.0 JMS client - supplementary coding standards. I'm starting a new thread for this specific topic and am including the proton list. To recap, Rob, Rajith, Rafi and Gordon have expressed a desire for Proton and the new JMS client to use a custom logging facade, rather than directly calling log4j, slf4j etc. The Proton logging facade would work consistently across proton-c and proton-j. I think the case for adopting this approach is overwhelming, but am interested in views on the best implementation.

*=== Proton ===*

I added a diagram to the wiki illustrating how this might work for proton-j. It's not finished, but I thought it useful to share it early to stimulate discussion. Hopefully the implied proton-c equivalent is fairly obvious. https://cwiki.apache.org/confluence/display/qpid/Proton+Logging

I'm not sure what would go into ProtonOperationalLogger at the moment (Rob/Rafi may know), but want to leave the door open to separating Proton-specific methods from general-purpose log(Level, String) kind of stuff. It does at least give us a place to define the behaviour of the public logging API that Rob referred to, and which would behave the same as its proton-c counterpart. To me, the Logger interface in the diagram looks very similar to the Qpid Java Broker's RootMessageLogger. Proton *may* use it directly for debug logging.
*=== JMS Client ===*

Turning to the JMS client, my initial preference would be to create interfaces JmsOperationalLogger and JmsLogger corresponding to the Proton ones. The JMS Client would pass to Proton a ProtonLogger implementation that simply wraps its JmsLogger. Alternatively we could create a Logger interface in a central sub-project and use it in both Proton and the JMS Client, but I suspect that will involve more re-jigging of our project structure than we currently have appetite for.

Comments/criticisms etc welcomed. I'm especially interested in whether there are proton-c-specific factors that would significantly affect our implementation. Phil
Qpid-specific logging facade(s) for Proton etc
An interesting discussion about logging has emerged from the mailing thread AMQP 1.0 JMS client - supplementary coding standards. I'm starting a new thread for this specific topic and am including the proton list. To recap, Rob, Rajith, Rafi and Gordon have expressed a desire for Proton and the new JMS client to use a custom logging facade, rather than directly calling log4j, slf4j etc. The Proton logging facade would work consistently across proton-c and proton-j. I think the case for adopting this approach is overwhelming, but am interested in views on the best implementation.

*=== Proton ===*

I added a diagram to the wiki illustrating how this might work for proton-j. It's not finished, but I thought it useful to share it early to stimulate discussion. Hopefully the implied proton-c equivalent is fairly obvious. https://cwiki.apache.org/confluence/display/qpid/Proton+Logging

I'm not sure what would go into ProtonOperationalLogger at the moment (Rob/Rafi may know), but want to leave the door open to separating Proton-specific methods from general-purpose log(Level, String) kind of stuff. It does at least give us a place to define the behaviour of the public logging API that Rob referred to, and which would behave the same as its proton-c counterpart. To me, the Logger interface in the diagram looks very similar to the Qpid Java Broker's RootMessageLogger. Proton *may* use it directly for debug logging.

*=== JMS Client ===*

Turning to the JMS client, my initial preference would be to create interfaces JmsOperationalLogger and JmsLogger corresponding to the Proton ones. The JMS Client would pass to Proton a ProtonLogger implementation that simply wraps its JmsLogger. Alternatively we could create a Logger interface in a central sub-project and use it in both Proton and the JMS Client, but I suspect that will involve more re-jigging of our project structure than we currently have appetite for.

Comments/criticisms etc welcomed.
I'm especially interested in whether there are proton-c-specific factors that would significantly affect our implementation. Phil
Re: proton-j Messenger tests failing on Jenkins (PROTON-295)
I initially disabled the failing test because I didn't have time to exclude it more selectively. I've just committed a change under PROTON-315 to make it skip iff we're using proton-j. It is unfortunate that the Java Messenger implementation is falling so far behind the C implementation. Sadly, I don't have the bandwidth to address this either. Phil

On 16 May 2013 14:26, Rafael Schloming r...@alum.mit.edu wrote: On Thu, May 16, 2013 at 4:35 AM, Phil Harvey p...@philharveyonline.com wrote: Hi Rafi, I have Jira'd this test failure in PROTON-315 and commented out the failing test. I have initially assigned the Jira to you, but you may wish to canvas for people to assist with whatever Java Messenger changes are necessary. I'm not very familiar with this code personally, but would be happy to try to assist anyway.

The test passes for the C impl and was added to check for a serious regression in the C impl, so disabling it entirely is obviously not ideal. I'd suggest just skipping it for the Java impl for now if you want the tests to pass. I will note, however, that it's not a test for a new feature; it's simply an additional test for an existing feature, and it covers what would likely be a common usage of that feature, so having the tests actually pass without that test included still isn't that great a situation. It's not really the same as a new feature that we have yet to add to the Java code; it really is a fairly basic malfunction. Put another way, this isn't a code change that caused existing tests to fail. From the Java perspective this is simply an additional test with no code changes, one that covers a pretty basic usage scenario. Overall there is a growing chunk of work that needs to be done to the Java Messenger impl to bring it up to parity with the C side, both on features and bug fixes.
I'm not sure what to do about it, as we don't really have a party taking ownership of that code, and I don't currently have time to learn the ins and outs of it, at least not piecemeal for isolated patches.

More generally, I believe we should not commit code to trunk that doesn't pass all the tests. I just want to check that this is still our policy, so please shout if you disagree.

I certainly agree. I did actually check that the tests pass and I was under the impression they did; however, upon further investigation my check was foiled by a number of things:

- the config.sh script was never updated to find maven jars; because of this I could only run the tests via the build system rather than launching proton-test directly
- the java stuff didn't build with cmake because of the bouncycastle dependency
- the jni stuff *did* build (which I mistook for the non-JNI tests passing)
- the jni tests were falsely reported as passing; they seem to seg fault part way through, and cmake reports them passing

I was only able to see the failure after downloading the bouncycastle dependency and updating the config.sh script to find the jars that cmake builds; however, make test still does not appear to run a pure java profile, only the jni tests, even when the pure java is actually successfully built. --Rafael
Re: Maven Deployment of 0.4 Release
Hi Hiram, Did you ever get a reply about this (e.g. on the IRC channel)? I think Rafi created the 0.4 release candidate, but I don't know if there's a specific reason why it hasn't been promoted. Rafi - can you shed any light on this? I'm happy to lend a hand if anything needs doing. Phil

On 20 May 2013 15:09, Hiram Chirino hi...@hiramchirino.com wrote: I don't see the 0.4 release in maven central: http://repo2.maven.org/maven2/org/apache/qpid/proton-api/ Can you guys make sure that a maven deployment is part of the release process? -- Hiram Chirino Engineering | Red Hat, Inc. hchir...@redhat.com | fusesource.com | redhat.com skype: hiramchirino | twitter: @hiramchirino blog: Hiram Chirino's Bit Mojo
Re: Proton status update 2013-05-10
- *Keith and Phil have been intermittently continuing work on the Transport API refactoring* (PROTON-225: https://issues.apache.org/jira/browse/PROTON-225) for Proton-J.
  - They have nearly finished the Java changes.
  - Corresponding changes will also need to be made to the JNI classes and their Swig file.
- *Ken has been working on a demo based on Messenger*.
  - The goal is to build a simple RPC client/server that will operate over an unreliable mesh of dispatch-routers, achieving exactly-once message delivery using only the messenger api (at the clients) over an unreliable multi-hop mesh of amqp routers.
  - To be clear - the routers offer no persistence; they merely forward the message and propagate back the delivery status (see the qpid/extras/dispatch/router source code for all the gory details).
  - In doing this we hope to identify any weaknesses or missing features in the current api or documentation for messenger.
  - See the fortune directory in https://github.com/kgiusti/proton-tools

See https://cwiki.apache.org/confluence/display/qpid/Proton+status+update+2013-05-10. Please shout if you want to add anything. Phil

On 10 May 2013 09:05, Phil Harvey p...@philharveyonline.com wrote: Hi, Please add a comment to the following page if you'd like to inform others about Proton work you've done since mid-April or about plans for upcoming work. https://cwiki.apache.org/confluence/display/qpid/Proton+status+update+2013-05-10 I'll modify the page to summarise the comments next Wednesday. Thanks, Phil
Proton status update 2013-05-10
Hi, Please add a comment to the following page if you'd like to inform others about Proton work you've done since mid-April or about plans for upcoming work. https://cwiki.apache.org/confluence/display/qpid/Proton+status+update+2013-05-10 I'll modify the page to summarise the comments next Wednesday. Thanks, Phil
proton-j Messenger tests failing on Jenkins (PROTON-295)
Hi, The following commit made on Wednesday is causing the proton-j Jenkins job to fail:

Revision 1480445 (http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1480445) by rhs (https://builds.apache.org/user/rhs/): PROTON-295 (https://issues.apache.org/jira/browse/PROTON-295): decoupled tracking of store entries from put/get of store entries, fixed tracking of incoming entries to start when they are returned via get rather than when they are read off of the wire

See: https://builds.apache.org/view/M-R/view/Qpid/job/Qpid-proton-j/334/

Is anyone who is familiar with this change available to fix it, please? I recommend that folks subscribe to Jenkins build notification emails so they are aware when their commits break the build. Thanks, Phil
Proton status update 2013-04-11
Main Proton progress in the last two weeks:

- Phil and Keith committed the first tranche of *JUnit system tests for Connection* (PROTON-284: https://issues.apache.org/jira/browse/PROTON-284).
- Clarified Proton *Engine's error handling behaviour* as part of this (see the mailing list discussion: http://qpid.2158936.n2.nabble.com/Defining-the-behaviour-of-Proton-Engine-API-under-error-conditions-tt7590533.html#none).
- Phil and Keith started work on the Java implementation of the *Transport API redesign* (PROTON-225: https://issues.apache.org/jira/browse/PROTON-225), i.e. the analogous API to pn_transport_push / pop / capacity / pending.
- Ken plans to add *Valgrind coverage to the SSL unit tests* next.

Taken from https://cwiki.apache.org/confluence/display/qpid/Proton+status+update+2013-04-11. Phil
Proton status update 2013-04-11
I have created the placeholder page [1] for us to note Proton progress over the last couple of weeks. If you're actively working on Proton, please could you add a brief comment to the page. On Monday I'll edit the body of the page to summarise the comments and notify this mailing list. The status updates should be a couple of sentences describing (with Jira numbers where possible): - What you did - What you are planning to do - Any blockers [1] https://cwiki.apache.org/confluence/display/qpid/Proton+status+update+2013-04-11
Now generating HTML from markdown files, and added Engine markdown document
I recently committed some changes under proton/docs as part of PROTON-280: - To play nicely with Maven's markdown-to-HTML generation, I moved docs/* into docs/markdown. - I added docs/markdown/engine/engine.md which aims to summarise the Engine concepts. There's more I'd like to add to this document but I wanted to check something in ASAP to give folks an opportunity to comment. To generate the full HTML site (in proton/target/staging/), including HTML generated from the markdown documents, run this command: mvn package site site:stage Eventually we may want to include some of this output somewhere public, e.g. on Jenkins (as Jenkins artifacts) and/or qpid.apache.org. Phil
Re: Defining the behaviour of Proton Engine API under error conditions
Thanks for the response. It does clarify the Engine's semantics and the intended division of responsibility between the Engine and the application. I intend to document this soon in a short conceptual summary under proton/docs/engine/.

I chatted to Keith about this and we're uncertain about some of the details of the steps that follow an invalid frame being pushed into the Engine. To illustrate this, we wrote the following pseudo-code for the main loop of a typical application (e.g. a messaging client), similar to Driver's pn_connector_process function.

 1  tail_buf = pn_transport_tail()
 2  tail_capacity = pn_transport_capacity()
 3  read = socket_recv(tail_buf, tail_capacity)
 4  # ... [1]
 5
 6  push_err_no = pn_transport_push(read)  # see [Q1]
 7
 8  if (push_err_no < 0)
 9      socket_shutdown(SHUTDOWN_READ)
10  end if
11
12  # ... [2]
13
14  head_pending = pn_transport_pending()  # see [Q2]
15  if (head_pending > 0)
16      head_buf = pn_transport_head()
17      written = socket_send(head_buf, head_pending)
18      # ... [3]
19
20      pn_transport_pop(written)
21  else if (head_pending < 0)
22      socket_shutdown(SHUTDOWN_WRITE)
23  end if

Elided sections:
[1] A well-behaved application would call pn_transport_close_tail() if socket_recv() <= 0
[2] Application makes use of top half API - pn_session_head(), pn_work_head() etc
[3] A well-behaved application would call pn_transport_close_head() if socket_send() < 0

=== Questions about error handling ===

Imagine that the bytes read from the socket on line 3 represent a valid frame followed by a frame that is invalid (e.g. because it contains a field of an unexpected datatype). In this case:

[Q1] Should pn_transport_push return -1 on line 6, thereby signalling that the application can't push any more bytes into it?

[Q2] On lines 14-21, what is in the transport's outgoing byte queue? We expect that it would be:

    frame1   # corresponding to top-half API calls on line 12
    frame2
    ...
    the CLOSE frame triggered by the invalid input.
Or maybe the CLOSE frame somehow replaces the other outgoing frames in the transport's outgoing byte queue? Note: if the application supports failover, it would subsequently unbind the transport, create a new socket, create a new transport, and bind the existing connection to it. Phil

On 28 March 2013 16:25, Rafael Schloming r...@alum.mit.edu wrote: On Thu, Mar 28, 2013 at 11:16 AM, Phil Harvey p...@philharveyonline.com wrote: On 28 March 2013 13:17, Rafael Schloming r...@alum.mit.edu wrote: On Thu, Mar 28, 2013 at 5:31 AM, Rob Godfrey rob.j.godf...@gmail.com wrote: On 28 March 2013 02:45, Rafael Schloming r...@alum.mit.edu wrote: On Wed, Mar 27, 2013 at 6:34 PM, Rob Godfrey rob.j.godf...@gmail.com wrote: On 27 March 2013 21:16, Rafael Schloming r...@alum.mit.edu wrote: On Wed, Mar 27, 2013 at 11:53 AM, Keith W keith.w...@gmail.com wrote: [..snip..] [..snip..]

To answer your question, say there is a framing error/the-wire is cut (really there isn't any way to know the difference, since you could cut the wire half way through a frame header): the transport interface will write out the close frame as required by the spec, and it will indicate through its error interface that an error has occurred; however, it won't alter any of the local/remote states of the top half endpoints. The local states remain reflective of the local app's desired state, and the remote states remain reflective of the remote app's last known desired state. This kind of has to be this way because you don't want to confuse links being involuntarily detached because the wire was cut with the remote endpoint wanting to actively shut down the link.

I find this a little confusing. After the Transport has silently sent the Close frame, what would the local Application typically do next in order to get the Engine back to a usable state?

It would unbind the transport.
When the transport is unbound, all the remote state is cleared, and the app is free to use the connection/endpoint data structure as if it had simply built them up into their current state explicitly via constructors, as opposed to it being the result of network interactions.

There is an alternative approach which I would find simpler. In this alternative, the Engine would not implicitly send the Close frame. Instead, the Application would explicitly control this by doing the following:
- The Application checks the Transport's error state as usual
- The Application discovers that the Transport is in an error state and therefore calls Connection.setCondition(errorDetailsObtainedFromTransport) followed by Connection.close()
- The Application calls Transport.output (or pn_transport_head in proton-c), causing the Close frame bytes to be produced.

As a Proton Engine developer I would find this simpler to implement. I'm
Re: Defining the behaviour of Proton Engine API under error conditions
On 28 March 2013 13:17, Rafael Schloming r...@alum.mit.edu wrote: On Thu, Mar 28, 2013 at 5:31 AM, Rob Godfrey rob.j.godf...@gmail.com wrote: On 28 March 2013 02:45, Rafael Schloming r...@alum.mit.edu wrote: On Wed, Mar 27, 2013 at 6:34 PM, Rob Godfrey rob.j.godf...@gmail.com wrote: On 27 March 2013 21:16, Rafael Schloming r...@alum.mit.edu wrote: On Wed, Mar 27, 2013 at 11:53 AM, Keith W keith.w...@gmail.com wrote: [..snip..] [..snip..]

To answer your question, say there is a framing error/the-wire is cut (really there isn't any way to know the difference, since you could cut the wire half way through a frame header): the transport interface will write out the close frame as required by the spec, and it will indicate through its error interface that an error has occurred; however, it won't alter any of the local/remote states of the top half endpoints. The local states remain reflective of the local app's desired state, and the remote states remain reflective of the remote app's last known desired state. This kind of has to be this way because you don't want to confuse links being involuntarily detached because the wire was cut with the remote endpoint wanting to actively shut down the link.

I find this a little confusing. After the Transport has silently sent the Close frame, what would the local Application typically do next in order to get the Engine back to a usable state?

There is an alternative approach which I would find simpler. In this alternative, the Engine would not implicitly send the Close frame. Instead, the Application would explicitly control this by doing the following:
- The Application checks the Transport's error state as usual
- The Application discovers that the Transport is in an error state and therefore calls Connection.setCondition(errorDetailsObtainedFromTransport) followed by Connection.close()
- The Application calls Transport.output (or pn_transport_head in proton-c), causing the Close frame bytes to be produced.
As a Proton Engine developer I would find this simpler to implement. Moreover, as a Proton Engine user, it gives me a clearer separation of responsibility between the application and the Engine. Maybe I'm just not grokking the Engine's philosophy, but on the whole the Engine API feels like it gives me control over what frames are being produced (though not *when* - and that's fine by me), so I find the idea of the Transport layer silently sending a Close rather surprising. What are people's views on this? [..snip..] --Rafael Phil
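Phil's explicit-close alternative, condensed into pseudocode (the names are the ones used in this thread; the error accessor and exact signatures are illustrative, not a settled API):

```
// 1. Application notices the Transport error
if (transport is in an error state)
{
    // 2. Application propagates the error and requests close explicitly
    connection.setCondition(errorDetailsObtainedFromTransport);
    connection.close();

    // 3. Application pumps the Transport (Transport.output in proton-j,
    //    pn_transport_head in proton-c) to produce the Close frame bytes
    bytes = transport.output();
}
```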
Re: Yet Another communication improvement suggestion
Thank you for all your replies. I was pleased by the positive response so I think it is worth setting something up. Here is my revised proposal. I believe this process will significantly increase visibility for all the project stakeholders with minimal effort. === Proposed status updates process === I will initially act as coordinator, with one or two other people providing cover when I'm unavailable. Every two weeks [1]: - I will create a Status Update 2013-mm-dd wiki page for that period [2] - I will send a mail to the list to prompt people to add their updates to the page as comments. - I will edit the body of the wiki page to summarise the comments [3]. The summary may also contain other interesting events, e.g. releases. - I will mail a link to the wiki page - I will update the roadmap if necessary [4]. The status updates will be one or two sentences describing (with Jira numbers where possible): - What you did - What you are planning to do - Any blockers === Notes === [1] I *think* a two week period is the right frequency, but am open to suggestions. [2] All the status updates will be child pages of an umbrella one, which will be hyperlinked from the main sidebar on the wiki. [3] I appreciate Rafi's concern about funnelling these updates through a single person. However, I don't think we can achieve a sufficiently high quality summary by simply aggregating individual updates. Nevertheless, by storing the raw updates as comments on the same wiki page as the distilled summary, it will be easy for us to tweak this process as we go along. [4] I should not be the only person updating the roadmap. Ad hoc discussions on the list should also trigger roadmap updates, by whoever makes most sense at the time. Let me know what you think. Phil On 14 March 2013 15:44, Rafael Schloming r...@alum.mit.edu wrote: I definitely agree we should make both the longer term roadmap and the things being actively worked on for the next release more visible. 
One frustration I've had with our communication tools has been with the wiki. I actually had quite a good experience at first. I was happy with how easy it was to author the Protocol Engines doc I wrote a little while back. Since then though I have noticed that it is very difficult to find something once you've authored it. There is no obvious way to navigate to the page when you go here: https://cwiki.apache.org/qpid/, the search box on the top doesn't seem to work well at all, and if you google "proton protocol engines" you actually get to the mailing list updates for the document but not the document itself. I think any process that somehow distills and summarizes the higher frequency activity from jira/lists/irc would really need to solve and/or find a better means of publishing the info than we currently have with the wiki. I think we have a general gap in (good) tooling for low-frequency/live-updated material. Regarding the specific process you mention, I'd be happy to contribute to periodic status/activity updates. I would, however, prefer a more distributed process than funnelling through one person, i.e. put the updates into some kind of shared/concurrently editable thing, e.g. a wiki page or a google doc. --Rafael On Tue, Mar 12, 2013 at 1:22 PM, Phil Harvey p...@philharveyonline.com wrote: There is a lot of really exciting development being done on Proton at the moment. However, I often wish that I had better visibility of ongoing work, so that I could better complement the work others are doing. Currently, the ways I find out about this work are: - Jira updates - The mailing list - IRC There are two problems with this: (1) I only get a partial view of what's going on, and (2) stuff usually gets put on Jira and the mailing list too late, i.e. when it's already in progress or is actually finished. Also, we do have a roadmap on the wiki [1], but I don't think this is used by many people at the moment.
Maybe my desire for more visibility and coordination could be viewed as rather command and control, and therefore not in the spirit of open source. I'd be interested to hear what others think about this. For the record, what I think we should introduce is: 1. A regular round-up email that gets sent to the list. Someone would be responsible for collating brief emails from developers describing what they're planning to work on, and would condense this into something useful to the general Proton community. I would be happy to perform this role. This round-up would necessarily be descriptive, not prescriptive. 2. We would commit to keeping the roadmap more up to date so that it becomes a useful resource for people wishing to work in a complementary way. I believe that most of the above points could apply to the Qpid project as a whole. But, to avoid trying to boil the ocean, I thought it would be worth testing these ideas in the narrower
Re: Why 2 space indentations??
I'm feeling a bit guilty about this one because I suspect it was my indentation-related review comments on some of Ken's recent soak test work that may have been the catalyst for this thread. We've got a few options about what to do, each bringing their own special kind of pain. I would be in favour of converting existing Proton Python files to use 4 spaces. This obviously entails the up-front pain of doing the conversion, plus some mild, occasional pain when diff-ing a file's version history that spans the re-indentation commit (you can ignore whitespace using git diff --ignore-all-space or svn diff -x --ignore-space-change). I think this is preferable to the frequent, long-lasting pain of switching between 2-space and 4-space indentation when I navigate across files within the project. Not to mention the occasional, *severe* pain of debating indentation styles on the mailing list. Surely no one is actually enjoying this discussion, and if we don't resolve it now then it's sure to come up again in six months time ;-) My second choice would be that we make all the Proton Python files use 2-space indentation. Despite my preference, ceteris paribus, for 4 spaces, this would be a less disruptive change based on Rafi's statistics. Phil On 15 March 2013 12:15, Ted Ross tr...@redhat.com wrote: +1. I've also wondered why this one codebase was written with 2-space indentation. My editors are all set up for 4-spaces so doing any work in this code is a pain. I'm in favor of converting all of it to comply with the 4-space convention. -Ted On 03/14/2013 09:18 AM, Ken Giusti wrote: Not to fire off a religious flame-war here - but this has stuck in my craw for awhile: Why is the proton C and Python code using 2 space indent? 
Two space indent does not conform with the existing QPID coding guidelines established for C++ nor Java: https://cwiki.apache.org/qpid/java-coding-standards.html https://cwiki.apache.org/qpid/cppstyleguide.html even python code should not be using 2 space indents, as God and Guido intended: http://www.python.org/dev/peps/pep-0008/#indentation Heck - the proton-j Java code uses 4 spaces! We're not even self-consistent! Sorry to bring this up - there's plenty of real work that needs to be done for proton. But I'm OLD, and my eyes ain't what they used to be. Working with 2 space indents isn't fun. In the case of python, it's damn painful. Going forward, can we please use 4 space indents? And, over time, convert the existing codebase? -K
Re: Jira numbers in commit messages
If I see NO-JIRA then I usually infer that the author considered whether a Jira was required and decided not. Without this marker, I can't distinguish between deliberate and accidental omission of the Jira number. On 14 March 2013 14:47, Rafael Schloming r...@alum.mit.edu wrote: To be honest, I've never really understood the point of the NO-JIRA thing. What's the technical difference between NO-JIRA: blah and simply omitting the PROTON-xxx? I can't see that it would significantly improve grepability since either way you need to run a regex over the whole log string for anything that matches PROTON-[0-9]+. --Rafael On Thu, Mar 14, 2013 at 10:31 AM, Phil Harvey p...@philharveyonline.com wrote: I notice a fair smattering of recent Proton commits without Jira numbers in them. As far as I'm aware, all commits should either contain a Jira number in the format PROTON-xyz: or, for exceptionally simple changes, NO-JIRA:. Please shout if you disagree. Thanks Phil
Re: How about docs at top level?
I'm happy with the location although to increase consistency with other projects I have a mild preference for either docs or doc. The former seems to be the most common in other open source projects, and the latter is the name used by Qpid. Phil On 4 March 2013 19:53, Michael Goulish mgoul...@redhat.com wrote: I'm planning to start checking my docs into the proton tree soon. I was assuming I would just put them at top level, i.e. qpid-proton/documentation Anybody care to agree, object, counter-offer, praise, complain, argue, question, or muse ?
proton(-j) threading model
I've been working with the proton-j engine recently and want to clarify the threading model. The Proton web site [1] says Proton is architected to be usable with any threading model as well as with non threaded applications. Turning to the implementation, I've heard that the proton-j engine is intended to be used by one thread at once. This sounds reasonable, but I want to clarify what one thread at once really means in this context. I believe we should say something like this in the proton-j documentation: Proton engine classes are not synchronized. If multiple threads access a Proton engine object concurrently, external synchronization must be used. Ditto for the Message package. Any improvements, objections etc? Out of interest, what is the threading model of Messenger? And how about proton-c? Thanks, Phil [1] http://qpid.apache.org/proton/
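The proposed documentation ("external synchronization must be used") amounts to guarding every engine call with a single application-supplied lock. A self-contained sketch of that pattern: FakeEngine stands in for an unsynchronized Proton engine object (it is not a Proton class), and SynchronizedEngine is the wrapper an application would supply.

```python
import threading

class FakeEngine:
    """Stand-in for an unsynchronized engine object (NOT a Proton class)."""
    def __init__(self):
        self.events = 0

    def process(self):
        # Deliberately non-atomic read-modify-write, like any
        # unsynchronized mutable object.
        current = self.events
        current += 1
        self.events = current

class SynchronizedEngine:
    """External synchronization: every call into the engine holds one lock."""
    def __init__(self, engine):
        self._engine = engine
        self._lock = threading.Lock()

    def process(self):
        with self._lock:
            self._engine.process()

def hammer(engine, calls=10000, nthreads=4):
    """Drive the engine from several threads at once."""
    def work():
        for _ in range(calls):
            engine.process()
    threads = [threading.Thread(target=work) for _ in range(nthreads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
```

With the lock in place, concurrent callers serialize and no updates are lost; the same guarantee is what the application, not the engine, must provide.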
Discrepancy between Java and C is settled methods
There's a confusing difference in the meanings of the delivery "is settled" methods in proton-j (Delivery.isSettled) and proton-c (pn_delivery_settled): their return values represent the local and remote values respectively. proton-j has a separate remotelySettled() method, whereas proton-c appears to have no way of accessing the local state. I'd like to modify one or both APIs to resolve this semantic difference. There are clearly a number of options. My favourite is to modify the proton-c function to return the local value, and add a new function to return the remote one. This is consistent with the other functions that have local and remote counterparts, e.g. pn_link_source and pn_link_remote_source. If this change in proton-c API semantics is too abrupt, maybe we could just deprecate the existing function and method and add new ones that are explicit about their local/remote meaning. What do people think? Thanks, Phil
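The favoured option above, sketched as hypothetical proton-c declarations modelled on the pn_link_source / pn_link_remote_source pattern (pn_delivery_remote_settled is an invented name for illustration, not an existing function):

```
/* Changed semantics: report the *local* settlement state. */
bool pn_delivery_settled(pn_delivery_t *delivery);

/* New function: report the *remote* settlement state
   (what pn_delivery_settled returns today). */
bool pn_delivery_remote_settled(pn_delivery_t *delivery);
```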
Re: Difficulties building Swig-generated C++ code for proton-jni binding (due to nested typedef union)
Hi Cliff, Thanks for looking into this. Simply setting the BUILD_WITH_CXX flag with a clean checkout doesn't expose the problem because it doesn't affect the language that Swig uses. You'd need to also: - Edit bindings/java/CMakeLists.txt to tell Swig to honour the BUILD_WITH_CXX flag and use C++ rather than C when generating the proton-jni code, like so:

    if (BUILD_WITH_CXX)
      SET_SOURCE_FILES_PROPERTIES(java.i PROPERTIES CPLUSPLUS ON)
    endif (BUILD_WITH_CXX)

- Fix (or comment out) the offending lines in the generated code that violate C++'s stricter rules around casting void*'s etc. I'm not expecting you to actually do this ... unless you're feeling *really* keen :-) This allows the build to Swig-generate and compile the proton-jni C++ and Java code. However, the Java code that uses it fails to compile because Swig didn't generate the pn_atom_t_u Java class, due to its lack of support for nested unions in typedefs. It's all rather convoluted, but hopefully that clarifies the problem somewhat. Phil On 1 March 2013 08:47, Cliff Jansen cliffjan...@gmail.com wrote: I am trying to catch up with you to reproduce your problem. So far I can build on linux with cmake -DBUILD_WITH_CXX=ON. That works, so we know the problem is unrelated to swig itself or the switch to C++. It is either some problem with Visual Studio or a lack of strictness with g++. The warning you are seeing is from the Visual C++ compiler, and is just a warning. It is indeed a high profile suspect, but perhaps a red herring also. Right now I am trying to untangle the exact steps cmake is feeding into the vcxproj file so that I can compile the swig generated file by hand and zero in on the problem. But I don't claim to be in a better position to solve this than you. I doubt I will progress this much before the end of my day here. If you make further progress, let me know so that we can minimise duplicated effort.
Cliff On Thu, Feb 28, 2013 at 11:41 AM, Phil Harvey p...@philharveyonline.com wrote: I have been working with Keith on PROTON-249 (Build fails on Win8 / VS 2012 with path error [1]). When building Proton from MS Visual Studio, we understand that a C++ (rather than C) compiler is used. We therefore tried doing a C++ build on Linux as a first step (i.e. running cmake with -DBUILD_WITH_CXX=ON), and ran into a number of problems - see PROTON-254 [2]. Most of the problems relate to the stricter rules in C++ around casting etc, and are easy to fix. However, the fact that Swig doesn't support nested unions in C++ typedef's means that it doesn't generate Java class pn_atom_t_u, which our hand-written Java code depends on. We're interested in opinions about the best way forward, particularly from anyone who faced similar problems when building the other language bindings using C++. Thanks, Phil [1] https://issues.apache.org/jira/browse/PROTON-249 [2] https://issues.apache.org/jira/browse/PROTON-254
Difficulties building Swig-generated C++ code for proton-jni binding (due to nested typedef union)
I have been working with Keith on PROTON-249 (Build fails on Win8 / VS 2012 with path error [1]). When building Proton from MS Visual Studio, we understand that a C++ (rather than C) compiler is used. We therefore tried doing a C++ build on Linux as a first step (i.e. running cmake with -DBUILD_WITH_CXX=ON), and ran into a number of problems - see PROTON-254 [2]. Most of the problems relate to the stricter rules in C++ around casting etc, and are easy to fix. However, the fact that Swig doesn't support nested unions in C++ typedef's means that it doesn't generate Java class pn_atom_t_u, which our hand-written Java code depends on. We're interested in opinions about the best way forward, particularly from anyone who faced similar problems when building the other language bindings using C++. Thanks, Phil [1] https://issues.apache.org/jira/browse/PROTON-249 [2] https://issues.apache.org/jira/browse/PROTON-254
jenkins interop test failing due to PROTON-215 commits?
Hi, I notice that proton-jni on Jenkins is failing [1], probably due to the recent PROTON-215 commits. The failing test is: proton_tests.interop.InteropTest.test_message (https://builds.apache.org/view/M-R/view/Qpid/job/Qpid-proton-jni/org.apache.qpid$tests/35/testReport/junit/proton_tests.interop/InteropTest/test_message/) Is someone already looking into this? Thanks, Phil [1] https://builds.apache.org/view/M-R/view/Qpid/job/Qpid-proton-jni/lastBuild/
Re: [documentation] -- Intro to Proton
I do agree with you that having documentation committed alongside code is the right approach. I propose that we write this documentation in Markdown syntax. That gives us (or our users) the option of easily generating HTML whilst keeping the barrier to entry low for authors. I recognise that Markdown lacks the semantic richness of Docbook (used for the Qpid Broker), but I believe that's ok in this case since our documentation should be quite short (or we're doing something wrong). Phil On Feb 25, 2013 7:07 PM, Rajith Attapattu rajit...@gmail.com wrote: I'm a strong believer in maintaining our docs in the source tree, as it makes it easy to release docs alongside the code. Also it helps keep the docs current. The wiki based documentation in the past had many issues, the chief complaint being that it was stale most of the time. We could look at doing something similar to the qpid docs, or we could also use this opportunity to experiment with a different approach/tool set. Rajith On Mon, Feb 25, 2013 at 1:50 PM, Michael Goulish mgoul...@redhat.com wrote: I think I will be landing it in the code tree first, and from there, I don't know. Any suggestions? In the code -- I assume it should be at the top level? i.e. a sibling of the README file? i.e. qpid-proton-0.4/pulitzer_prize_winning_documentation or something along those lines? Agree? Disagree? - Original Message - From: Phil Harvey p...@philharveyonline.com To: proton@qpid.apache.org Sent: Monday, February 25, 2013 12:14:00 PM Subject: Re: [documentation] -- Intro to Proton Hi Michael, Maybe you didn't see my previous question (or maybe I didn't see your answer). Where are you intending to store this documentation? Similarly, where are you intending to publish it, e.g. as HTML and/or PDF on our web site, as a wiki page etc? Thanks Phil On 25 February 2013 16:15, Michael Goulish mgoul...@redhat.com wrote: Here's the introduction I'm planning on.
If anyone has any opinions, I'd be happy to get them -- is there too much detail for a quick intro? Too little? A crucial bit I left out? Something I got wrong? ## Introduction to Proton === The Messenger interface is a simple, high-level API that lets you create point-to-point messaging applications quickly and easily. The interface offers four main categories of functionality. Messenger Operation --- There are only a few operations that are not directly concerned with message transmission. A messenger can be created, named, and freed. It can be started and stopped, and it can be checked for errors after any operation. Sending and Receiving --- Both sending and receiving happen in two stages, the inner stage moving the message between your application and a queue, the outer stage transmitting messages between your queues and remote messaging nodes. By changing the ratio of transmissions to queue transfers, you can optimize your messaging application for message latency or for overall throughput. Subscriptions control what sources your messenger can receive from, and what sources it can send to. Your messenger subscribes to the sources you want to receive from, while your outgoing messages will be received by messengers that have subscribed to your outgoing address. Message Disposition --- When you receive messages, you must either accept or reject them. You can either configure your messenger to automatically accept all messages that you get, or you can exercise finer control over message acceptance and rejection, individually or in groups. Trackers and Windows let you set or check the disposition of messages in groups. Applying the disposition operations to groups of messages can improve your system's throughput. When receiving messages, you can create a tracker for the most recently received message, and later use that tracker to accept or reject all messages up to (and including) that one.
When sending messages, you can create a tracker for your most recently sent message, and later use it to inquire about the remote disposition of all sent messages up to that point. If you don't want to let a receiver make you wait forever to see what he's going to do, you can set a timeout that will control how long he can take making up his mind. By using incoming and outgoing Windows, you can limit the number of messages that these operations affect. Security --- The messenger
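The tracker/window description above amounts to the following receive-side pattern (pseudocode; the method names are illustrative paraphrases of the Messenger API described in this intro, not exact signatures):

```
messenger.recv(credit)
while messenger.incoming() > 0:
    message = messenger.get()
    process(message)
    tracker = messenger.incoming_tracker()   # marks the most recent message
# Accept everything up to and including the tracked message in one call,
# improving throughput by settling dispositions in a batch.
messenger.accept(tracker, cumulative=True)
```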
Re: 0.4 RC2
OK. I've modified JNIMessenger to throw ProtonUnsupportedOperationException so testSendBogus skips rather than fails when using the proton-jni profile. I believe there is still an issue with the pure Java implementation too because this test fails on my dev machine (though not on the Apache Jenkins job), therefore I'm leaving PROTON-214 open for now. I'm easy about whether we mention this stuff in the release notes. Since PROTON-214 is still open, maybe the existence of that Jira is enough to flag this deficiency. On 21 February 2013 21:42, Rafael Schloming r...@alum.mit.edu wrote: On Thu, Feb 21, 2013 at 10:53 AM, Rafael Schloming r...@alum.mit.edu wrote: On Wed, Feb 20, 2013 at 2:43 PM, Phil Harvey p...@philharveyonline.com wrote: Mostly looks good. One test is failing when run using the Java JNI binding - see below. Tested: - Download tarball - cmake, make, make install - Observed .so and .jar files installed to correct locations - Ran ./tests/python/proton-test - Ran mvn package test and observed all tests passing - Ran mvn -P proton-jni test - *FAILED* test proton_tests.messenger.MessengerTest.testSendBogus. I think this is existing issue PROTON-214 [1]. I don't know if this represents a serious functional problem but generally we wouldn't want to release something that fails any tests. I looked into this a little bit. The test is checking that using messenger to send to an address that contains an unresolvable domain name will actually fail. This appears to work fine with pure Java messenger, fine with pure C messenger, but for some reason doesn't work with Java using C via JNI. This is a bit of a mystery to me as I believe the networking code is exactly the same in both cases. This needs a bit more investigation to figure out what is going on, but given how isolated this is I'd prefer not to hold up the release if we don't turn up anything fruitful soon. Ok, mystery solved. 
My assumption was that the profile was running the Java messenger with a JNI wrapped engine. It's actually JNI wrapped the entire C messenger impl and JNIMessenger.java doesn't actually check any return codes, e.g.:

    @Override
    public void put(final Message message) throws MessengerException
    {
        SWIGTYPE_p_pn_message_t message_t = (message instanceof JNIMessage)
            ? ((JNIMessage)message).getImpl()
            : convertMessage(message);
        int err = Proton.pn_messenger_put(_impl, message_t);
        //TODO - error handling
    }

As this isn't really fixable in an 0.4 timeframe, I think we should just release note this issue. --Rafael
Re: example of proton documentation
Hi Michael, Looks like a good start. Where are you intending to store this documentation? Similarly, where are you intending to publish it, e.g. as HTML and/or PDF on our web site, as a wiki page etc? I'm particularly interested in this topic because I'm focusing on the lower level Engine layer at the moment and might write some documentation for it, so I'll want it to be complementary to your Messenger docs. Phil On 22 February 2013 10:14, Michael Goulish mgoul...@redhat.com wrote: I wonder if anybody out there in Proton Land has an opinion about this little piece of Proton doc. This is an example of the documentation I'm creating as I use the Proton interface to I'd be interested to hear about 1. correctness 2. clarity 3. focus 4. usefulness 5. anything else that strikes you I don't know what to call this -- a mini-tutorial? -- a 'topic'? -- but in any case, I expect I will have 5 or 6 pieces like this for the Proton Messenger doc when I'm done, so I wanted to See What You Think. Yes, *you* ! - everything below this point is the doc - Controlling Message Flow with Credit = To control the flow of messages into your Proton application, use the second argument to pn_messenger_recv ( messenger, credit ); If you set credit to a positive value, it will limit the number of messages that pn_messenger_recv enqueues for you on that call. The number you provide is a maximum. The call to pn_messenger_recv may also enqueue fewer messages, or none. 
You can learn how many were received with: pn_messenger_incoming ( messenger ); You can then dequeue each message, one at a time, into your application with successive calls to pn_messenger_get ( messenger ); A typical pattern ---

    int i, incoming;
    pn_messenger_recv ( messenger, credit );
    incoming = pn_messenger_incoming ( messenger );
    for ( i = 0; i < incoming; ++ i )
    {
        pn_messenger_get ( messenger, message );
        consume_message ( message );
    }

Infinite Credit -- You can also grant 'infinite' credit by using a negative value as the second arg to pn_messenger_recv(). This will have the effect of granting 10 units of credit to every link on that call to pn_messenger_recv(). A single messenger, listening on a single port, may have many incoming links. Credit does not drain -- Once granted by a call to pn_messenger_recv(), unused credit on a link does not go away when control returns from pn_messenger_recv(). It remains at the link, and successive calls can increase it.
Re: 0.4 RC2
Mostly looks good. One test is failing when run using the Java JNI binding - see below. Tested: - Download tarball - cmake, make, make install - Observed .so and .jar files installed to correct locations - Ran ./tests/python/proton-test - Ran mvn package test and observed all tests passing - Ran mvn -P proton-jni test - *FAILED* test proton_tests.messenger.MessengerTest.testSendBogus. I think this is existing issue PROTON-214 [1]. I don't know if this represents a serious functional problem but generally we wouldn't want to release something that fails any tests. - Created and ran a simple Maven project that depends on proton-api and proton-j-impl 0.4 in the https://repository.apache.org/content/repositories/orgapacheqpid-280/ repo. Environment: - Linux Mint 12, x86_64 - Java: Oracle 1.6.0_29 - cmake version 2.8.5 - SWIG Version 1.3.40 - Python 2.7.2+ Phil [1] https://issues.apache.org/jira/browse/PROTON-214 On 20 February 2013 20:31, Rafael Schloming r...@alum.mit.edu wrote: Source posted here: - http://people.apache.org/~rhs/qpid-proton-0.4rc2/ Java binaries here: - https://repository.apache.org/content/repositories/orgapacheqpid-280/ I'd like to call an official vote soon, e.g. tomorrow, so please have a look and share your results here. Changes from RC1: PROTON-199: support python 2.4 PROTON-236: freegetopt compat for proton-c windows builds. PROTON-243: fixed LIB_SUFFIX magic PROTON-200: maintain a minimum credit level for each receive link PROTON-200: make similar changes to Java MessengerImpl PROTON-200: allow recv(-1) to grant unlimited credit PROTON-232: described arrays seem to force the descriptor to be of the same type as the array Fixed bug in code.c pn_data_encode_node: was always using the parent-type for everything inside an array, including the descriptor. PROTON-242: Shared library used JNI code has poor name libproton-swig.so Shared library now has name libproton-jni.so i.e. follows the naming conventions of its companion jar.
PROTON-217: cmake build system should include install target for Java binaries --Rafael
Parallel Maven and CMake build systems for Proton
During the review [1] of PROTON-238, Alan made the following, not-entirely-unreasonable-sounding comment: Having 2 parallel build systems is a serious pain, as qpid demonstrates. Wouldn't it be better to leave maven for Java and cmake for everything else? People who want to build Java probably can get maven. (Alan: please forgive me if I've taken this out of context) I don't particularly want to open this can of worms again, but I think it is worth addressing this question. I believe the conclusions we reached in PROTON-194 were: - Our CMake build system will be capable of building and testing everything (both C and Java). It is required because some users who wish to build Proton don't have Maven access. - Our Maven build system will be retained because it is a more standard build tool for Java developers. I acknowledge that maintaining both build systems is an annoying duplication of effort. However, our requirements are to provide a convenient build system for all our users so we have no choice. Please shout if you disagree. Phil [1] https://reviews.apache.org/r/9433/
proton interop tests failed on Jenkins
I notice that the proton-jni Jenkins job has failed: https://builds.apache.org/view/M-R/view/Qpid/job/Qpid-proton-jni/25/ The commit that broke this build was probably this one for PROTON-215, by aconway: PROTON-215: Add tests to verify AMQP type support for python bindings. http://svn.apache.org/viewvc?view=revision&revision=1445920 Any volunteers to look into this? Phil
Re: Running the java tests
Hi Alan, You can use the 'test' system property (e.g. mvn test -Dtest=SomeTestClass), as described here: http://maven.apache.org/surefire/maven-surefire-plugin/examples/single-test.html If I remember correctly, the special 'pattern' property you're seeing in the pom is a non-standard one we added to tell JythonTest to run a subset of the Python tests. Note that there are not very many Java JUnit tests yet, but Keith and I are currently looking to add some more. Phil On Feb 14, 2013 9:51 PM, Alan Conway acon...@redhat.com wrote: Anyone know how to run a subset of proton's Java tests? The pom.xml explains how to do it for the jython test but not for other Java tests.
Re: transport interface changes
FYI I've raised PROTON-225 to cover the API redesign so that we have the option of implementing it separately from the PROTON-222 bug fix. On 7 February 2013 16:43, Phil Harvey p...@philharveyonline.com wrote: My 2 cents on the naming issue: I'm not convinced that a single queue is the best metaphor for the Transport, even if qualified by the term transforming. The meaning of the input and output data is surely so different that calling it a queue masks the essence of what the engine does. To me, a transforming queue suggests something that spits out something semantically identical to its input. For example, a byte queue whose head is a UTF-8-encoded transformation of its UTF-8 tail. I don't think Transport falls into this category, therefore my preference would be for the words input and output to appear in the function names. Phil On 7 February 2013 14:23, Ken Giusti kgiu...@redhat.com wrote: What we've got here is failure to communicate. There aren't necessarily distinct input/output buffer objects, rather the whole transport interface itself is really just single structure (a [transforming] byte queue) with push/peek/pop corresponding exactly to any standard queue interface. Aha! Well, that explains it - I've always thought that the transport was composed of two separate buffers - one for input, the other for output. At least, that's my interpretation of the existing API. A transforming byte queue didn't immediately pop into my mind when reading these new APIs. You may want to add a bit of documentation to that patch explaining this meme before the APIs are described. Would be quite useful to anyone attempting to implement a driver. -K - Original Message - Looks like the attachment didn't make it. Here's the link to the patch on JIRA: https://issues.apache.org/jira/secure/attachment/12568408/transport.patch --Rafael On Thu, Feb 7, 2013 at 8:10 AM, Rafael Schloming r...@alum.mit.edu wrote: Here's a patch to engine.h with updated docs/naming.
I've documented what I believe would be future lifecycle semantics, however as I said before I think an initial patch would need to be more conservative. I think these names would also work with input/output prefixes, although as the interface now pretty much exactly matches a circular buffer (i.e. a byte queue), I find the input/output prefixes a bit jarring. --Rafael On Thu, Feb 7, 2013 at 5:53 AM, Rafael Schloming r...@alum.mit.edu wrote: On Wed, Feb 6, 2013 at 5:12 PM, Rafael Schloming r...@alum.mit.edu wrote: On Wed, Feb 6, 2013 at 2:13 PM, Ken Giusti kgiu...@redhat.com wrote: Rafi, I agree with the rational behind these changes. /** Return a pointer to a transport's input buffer. This pointer may * change when calls into the engine are made. I think we'll need to be a little more liberal with the lifecycle guarantee of these buffer pointers. Drivers based on completion models (rather than sockets) could be forced to do data copies rather than supplying the pointer directly to the completion mechanism. That sentence was actually supposed to be deleted. The sentences after that describes the intended lifecycle policy for the input buffer: Calls to ::pn_transport_push may change the value of this pointer and the amount of free space reported by ::pn_transport_capacity. Could we instead guarantee that pointers (and space) returned from the transport remain valid until 1) the transport is released or 2) the push/pop call is made against the transport? That is in fact what I intended for push. For pop this would place a lot more restrictions on the engine implementation. Say for example peek is called and then the top half is used to write message data. Ideally there should actually be more data to write over the network, which means that the transport may want to grow (realloc) the output buffer, and this of course is more complex if the external pointer needs to stay valid.
Given that at worst this will incur an extra copy that is equivalent to what we're currently doing, I figured it would be safer to start out with more conservative semantics. We can always relax them later when we have had more time to consider the implementation. Or perhaps a reference count-based interface would be safer? Once the driver determines there is capacity/pending, it reserves the buffer and is required to push/pop to release it? Oh, and those names absolutely stink. ;) It's unclear from the function names what components of the transport they are affecting. I'd rather something more readable: pn_transport_capacity() -> pn_transport_input_capacity(); pn_transport_buffer() -> pn_transport_input_buffer()
Re: transport interface changes
Thanks for the clear description Rafi. The modified interface sounds reasonable, notwithstanding the resolution of the questions about naming, and of the valid lifespan of the input/output pointers. We're currently looking at the impact on proton-j and will respond more fully once we've got a better understanding of it. I believe there has been some brief discussion off-list of how the corresponding Java Transport API would look. This will be implemented both in pure Java and via JNI calls to proton-c. I've summarised my second-hand understanding of the proposed Java API below.

interface Transport {
    /** Like pn_transport_buffer. The ByteBuffer.remaining() method
     *  obviates the need for a pn_transport_capacity equivalent. */
    ByteBuffer getInputBuffer();

    /** Like push. No need for a corresponding size parameter because
     *  the input buffer's position() implies it. */
    int processInput();

    /** Like pn_transport_peek. */
    ByteBuffer getOutputBuffer();

    /** Like pop. */
    int outputWritten();

    /** ... deprecated existing input/output methods */
}

Phil On 6 February 2013 17:14, Rafael Schloming r...@alum.mit.edu wrote: A recent bug (PROTON-222) has exposed an issue with the transport interface. The details of the bug are documented in the JIRA, but the basic issue is that given the current transport interface there is no way for an application to discover via the engine interface when data has been actually written over the wire vs just sitting there waiting to be written over the wire. Note that by written over the wire I mean copied from an in-process buffer that will vanish as soon as the process exits, to a kernel buffer that will continue to (re)-transmit the data for as long as the machine is alive, connected to the network, and the remote endpoint is still listening. To understand the issue, imagine implementing a driver for the current transport interface. It might allocate a buffer of 1K and then call pn_transport_output to fill that buffer.
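To make the proposed calls concrete, here is a minimal sketch of how a driver loop might use such an API. The Transport method names follow the proposal above, but EchoTransport, pump, and the buffer sizes are hypothetical stand-ins for illustration, not Proton code:

```java
import java.nio.ByteBuffer;

public class TransportSketch {

    // The proposed interface, as summarised in the email above.
    public interface Transport {
        ByteBuffer getInputBuffer();   // like pn_transport_buffer/capacity
        int processInput();            // like push
        ByteBuffer getOutputBuffer();  // like pn_transport_peek
        int outputWritten();           // like pop
    }

    // Trivial stand-in: whatever bytes are pushed in come straight back
    // out. The real transport would decode/encode AMQP frames instead.
    public static class EchoTransport implements Transport {
        private final ByteBuffer input = ByteBuffer.allocate(1024);
        private ByteBuffer output = ByteBuffer.allocate(0);

        public ByteBuffer getInputBuffer() { return input; }

        public int processInput() {
            input.flip();                            // switch to reading
            int consumed = input.remaining();
            output = ByteBuffer.allocate(consumed);
            output.put(input).flip();                // "transform" input to output
            input.clear();                           // ready for more input
            return consumed;
        }

        public ByteBuffer getOutputBuffer() { return output; }

        public int outputWritten() {                 // driver consumed the output
            int released = output.remaining();
            output = ByteBuffer.allocate(0);
            return released;
        }
    }

    // One driver read/write cycle: copy socket bytes into the input
    // buffer, process them, then drain whatever the transport produced.
    public static byte[] pump(Transport t, byte[] fromSocket) {
        ByteBuffer in = t.getInputBuffer();
        in.put(fromSocket, 0, Math.min(fromSocket.length, in.remaining()));
        t.processInput();

        ByteBuffer out = t.getOutputBuffer();
        byte[] toSocket = new byte[out.remaining()];
        out.get(toSocket);
        t.outputWritten();  // tell the transport its output was consumed
        return toSocket;
    }

    public static void main(String[] args) {
        byte[] echoed = pump(new EchoTransport(), "AMQP".getBytes());
        System.out.println(new String(echoed));  // AMQP
    }
}
```

Note how ByteBuffer's position/limit bookkeeping stands in for the explicit size parameters of the C calls, which is exactly the point made in the proposal.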
The transport might then put say 750 bytes of data into that buffer. Now imagine what happens if the driver can only write 500 of those bytes to the socket. This will leave the driver buffering 250 bytes. The engine of course has no way of knowing this and can only assume that all 750 bytes will get written out. A somewhat related issue is the buffering ownership/strategy between the transport and the driver. There are really three basic choices here: 1) the driver owns the buffer, and the transport does no buffering 2) the transport owns the buffer, and the driver does no buffering 3) they both own their own buffers and we copy from one to the other Now the division between these isn't always static, there are hybrid strategies and what not, however it's useful to think of these basic cases. The current transport interface (pn_transport_input/output) and initial implementation was designed around option (1), the idea behind the pn_transport_output signature was that the engine could directly encode into a driver-owned buffer. This, however, turned out to introduce some unfriendly coupling. Imagine what happens in our hypothetical scenario above if the driver has a 1K buffer and the engine negotiates a 4K frame size. The engine might end up getting stuck with a frame that is too large to encode directly into the driver's buffer. So to make the interface more friendly, we modified the implementation to do buffering internally if necessary, thus ending up in some ways closer to option (3). Now the reason this buffering issue is related to PROTON-222 is that one way to allow the engine to know whether data is buffered or not is to redefine the interface around option (2), thereby allowing the engine to always have visibility into what is/isn't on the wire. This would also in some cases eliminate some of the extra copying that occurs currently due to our evolution towards option (3). 
Such an interface would look something like this:

// deprecated and internally implemented with capacity, buffer, and push
ssize_t pn_transport_input(pn_transport_t *transport, const char *bytes, size_t available);

// deprecated and internally implemented with pending, peek, and pop
ssize_t pn_transport_output(pn_transport_t *transport, char *bytes, size_t size);

/** Report the amount of free space in a transport's input buffer. If
 *  the engine is in an error state or has reached the end of stream
 *  state, a negative value will be returned indicating the condition.
 *
 *  @param[in] transport the transport
 *  @return the free space in the transport
 */
ssize_t pn_transport_capacity(pn_transport_t *transport);

/** Return a pointer to a transport's input buffer. This pointer may
 *  change when calls into the engine are made. The amount of space in
 *  this buffer is reported by ::pn_transport_capacity. Calls
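The queue discipline behind these proposed calls (report free space, append at the tail, inspect the head, discard only what was actually written) can be illustrated with a toy byte queue. This is written in Java for brevity and is purely illustrative: the real transport transforms bytes (AMQP frames in, encoded frames out) rather than passing them through, and ByteQueue is not a Proton class:

```java
import java.util.Arrays;

// A minimal byte queue mirroring the push/peek/pop discipline of the
// proposed pn_transport_* calls. Illustrative stand-in only.
public class ByteQueue {
    private byte[] data = new byte[0];

    // like pn_transport_capacity: free space available for push
    public int capacity(int limit) {
        return limit - data.length;
    }

    // like pn_transport_push: append bytes at the tail
    public void push(byte[] bytes) {
        int old = data.length;
        data = Arrays.copyOf(data, old + bytes.length);
        System.arraycopy(bytes, 0, data, old, bytes.length);
    }

    // like pn_transport_peek: look at the head without consuming it
    public byte[] peek(int n) {
        return Arrays.copyOfRange(data, 0, Math.min(n, data.length));
    }

    // like pn_transport_pop: discard n bytes from the head, called only
    // after the driver knows those bytes really reached the socket
    public void pop(int n) {
        data = Arrays.copyOfRange(data, Math.min(n, data.length), data.length);
    }

    // like pn_transport_pending: bytes waiting to be written
    public int pending() {
        return data.length;
    }

    public static void main(String[] args) {
        ByteQueue q = new ByteQueue();
        q.push("frames".getBytes());
        byte[] head = q.peek(3);         // driver writes what it can...
        q.pop(head.length);              // ...then pops only what was written
        System.out.println(q.pending()); // 3
    }
}
```

The peek/pop split is what resolves the PROTON-222 visibility problem described above: the engine never has to guess how much of its output the driver actually managed to write.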
Re: proposal for docs to aid Proton adoption
Hi Michael, Thank you for doing this. There's obviously a lot of interest in Proton but since it's quite an original library some new users do find it confusing. When do you expect to finish the first draft? The reason I ask is that I know some Proton talks are happening at ApacheCon in a couple of weeks, so I was wondering what we expect to have ready by then. What format are you planning to produce the documentation in (e.g. docbook, or HTML, or something else)? Presumably the documentation will be uploaded to http://qpid.apache.org/proton? I got the sense from your document outline that the main audience would be users of the Messenger API. What, if anything, do you plan to write for developers who are primarily using the Engine and Transport APIs? Thanks again, Phil On 4 February 2013 20:56, Michael Goulish mgoul...@redhat.com wrote: I am working on some documentation, examples, etc. to encourage easy adoption of Proton. I expect every one of you has had the experience of trying to use a new software package and not getting decent help doing so. I would like to do what I can to help delight any software person who decides to spend 5 or 10 minutes looking at Proton. Please take a look at my proposals, below, and let me know if I missed your pet peeve. -- Mick. Here is a list of the components of the documentation that I am proposing, with explanations of each. Documentation Components List { 1. Quick Start 2. Theory of Operation 3. Component Explanations 4. Riffs 5. Examples 6. Tutorials 7. Error Dictionary } Documentation Components Description { 1. Quick Start { A guide to getting up and running, from download to hello world in 10 minutes. The guide itself should be concise, and should help to diagnose any common problems you might run into. This document does not explain anything except what you need to know to get up and running. } 2.
Theory of Operation { An overview of the components of the messenger interface, and an explanation of how those components are used, and how they interact with each other. This information will be greatly expanded upon by the Component Explanations, the Riffs, and the Examples, but this section gives you the big picture. To explain with an analogy: my car has a driver's interface which consists of an ignition switch, a steering wheel, a clutch, a stick, a gas pedal, and a brake pedal. My daughter understands the function of each one of those things separately -- but she still can't drive the car. To drive the car you also need to know how those subsystems are all expected to work together. That is a 'theory of operation'. } 3. Component Explanations { These are like a Theory of Operation, but for individual components of the Messenger interface. More detailed than the overall Theory of Operation, but more narrow. One for each of the following Messenger components: 3.1 accept modes 3.2 errors 3.3 message windows 3.4 messengers 3.5 security 3.6 subscriptions 3.7 timeouts 3.8 trackers } 4. Riffs // rename these idioms ? { A 'riff' is a series of function calls that will often be associated with each other in application code. Riffs are code-snippets, not complete running examples, but the code in them would compile and run if it were inside a complete example. Each riff also contains an explanation of why the functions should be used together in this way. Some types of riffs: These functions will often be used together, in this order. These functions are mutually exclusive. Use the output from this one as the input to that one. } 5. Examples { Complete running examples, with explanations. These are narrowly focused on the topic at hand, leaving out code that may be good practice in a real application, but may divert attention from the topic at hand. Eventually, all of these examples should be written in both C and Java. 
Proposed example program topics: { messaging patterns { point-to-point fanout request-reply publish-subscribe dead letter } security error handling timeouts message windows sending receiving tracker } } 6. Tutorials { Tutorials are larger than examples, and are focused on real-world aspects of messaging applications, rather than being focused on the library code. A tutorial will typically be longer than an example, will contain more explanatory text, and will show and discuss alternative approaches. Proposed tutorial topics: {
Re: transport interface changes
My 2 cents on the naming issue: I'm not convinced that a single queue is the best metaphor for the Transport, even if qualified by the term transforming. The meaning of the input and output data is surely so different that calling it a queue masks the essence of what the engine does. To me, a transforming queue suggests something that spits out something semantically identical to its input. For example, a byte queue whose head is a UTF-8-encoded transformation of its UTF-8 tail. I don't think Transport falls into this category, therefore my preference would be for the words input and output to appear in the function names. Phil On 7 February 2013 14:23, Ken Giusti kgiu...@redhat.com wrote: What we've got here is failure to communicate. There aren't necessarily distinct input/output buffer objects; rather, the whole transport interface itself is really just a single structure (a [transforming] byte queue) with push/peek/pop corresponding exactly to any standard queue interface. Aha! Well, that explains it - I've always thought that the transport was composed of two separate buffers - one for input, the other for output. At least, that's my interpretation of the existing API. A transforming byte queue didn't immediately pop into my mind when reading these new APIs. You may want to add a bit of documentation to that patch explaining this meme before the APIs are described. Would be quite useful to anyone attempting to implement a driver. -K - Original Message - Looks like the attachment didn't make it. Here's the link to the patch on JIRA: https://issues.apache.org/jira/secure/attachment/12568408/transport.patch --Rafael On Thu, Feb 7, 2013 at 8:10 AM, Rafael Schloming r...@alum.mit.edu wrote: Here's a patch to engine.h with updated docs/naming. I've documented what I believe would be future lifecycle semantics, however as I said before I think an initial patch would need to be more conservative.
I think these names would also work with input/output prefixes, although as the interface now pretty much exactly matches a circular buffer (i.e. a byte queue), I find the input/output prefixes a bit jarring. --Rafael On Thu, Feb 7, 2013 at 5:53 AM, Rafael Schloming r...@alum.mit.edu wrote: On Wed, Feb 6, 2013 at 5:12 PM, Rafael Schloming r...@alum.mit.edu wrote: On Wed, Feb 6, 2013 at 2:13 PM, Ken Giusti kgiu...@redhat.com wrote: Rafi, I agree with the rationale behind these changes. /** Return a pointer to a transport's input buffer. This pointer may * change when calls into the engine are made. I think we'll need to be a little more liberal with the lifecycle guarantee of these buffer pointers. Drivers based on completion models (rather than sockets) could be forced to do data copies rather than supplying the pointer directly to the completion mechanism. That sentence was actually supposed to be deleted. The sentences after that describe the intended lifecycle policy for the input buffer: Calls to ::pn_transport_push may change the value of this pointer and the amount of free space reported by ::pn_transport_capacity. Could we instead guarantee that pointers (and space) returned from the transport remain valid until 1) the transport is released or 2) the push/pop call is made against the transport? That is in fact what I intended for push. For pop this would place a lot more restrictions on the engine implementation. Say for example peek is called and then the top half is used to write message data. Ideally there should actually be more data to write over the network, which means that the transport may want to grow (realloc) the output buffer, and this of course is more complex if the external pointer needs to stay valid. Given that at worst this will incur an extra copy that is equivalent to what we're currently doing, I figured it would be safer to start out with more conservative semantics.
We can always relax them later when we have had more time to consider the implementation. Or perhaps a reference count-based interface would be safer? Once the driver determines there is capacity/pending, it reserves the buffer and is required to push/pop to release it? Oh, and those names absolutely stink. ;) It's unclear from the function names what components of the transport they are affecting. I'd rather something more readable: pn_transport_capacity() -> pn_transport_input_capacity(); pn_transport_buffer() -> pn_transport_input_buffer(); pn_transport_push() -> pn_transport_input_written() I think your names (and my documentation) actually suffer from exposing too much of the implementation. There aren't necessarily distinct input/output
Re: proton bindings installation paths
Thanks Darryl, that makes sense. Phil On 30 January 2013 15:00, Darryl L. Pierce dpie...@redhat.com wrote: On Wed, Jan 30, 2013 at 02:53:17PM +, Phil Harvey wrote: I've been looking at where make install writes files to, and came across something that seems like a wrinkle. I'd like to get views on whether it's worth fixing. You can control the destination of most files using the CMAKE_INSTALL_PREFIX property. However, it looks like the bindings CMakeLists.txt files don't respect this. For example, the Python bindings are installed to wherever the Python expression get_python_lib evaluates to. It looks like the Perl bindings use a similarly non-overrideable location. This is a minor irritation because I'd like to test make install without affecting shared folders under /usr. I'll raise a Jira to fix this... unless there are any objections? We had a discussion on the list a while back about this topic [1]. The decision was that we would install language bindings into locations specified by the individual language environments. If you want to install things to a separate location, you need to set the DESTDIR variable when running make install in order to install the entire set to an alternate location. [1] http://mail-archives.apache.org/mod_mbox/qpid-proton/201212.mbox/%3CJIRA.12623970.1355492653982.7193.1355492773169@arcas%3E -- Darryl L. Pierce, Sr. Software Engineer @ Red Hat, Inc. Delivering value year after year. Red Hat ranks #1 in value among software vendors. http://www.redhat.com/promo/vendor/
Proton wiki pages and diagrams
I've created a Proton page on Confluence [1] and moved all existing Proton pages underneath it. One of these is a Proton Testing page [2], which contains a diagram illustrating the Python, proton-c, proton-j and JNI layers involved. I used the Gliffy Confluence plugin to draw the diagram. As we lack a common, platform neutral drawing package I quite like the Gliffy approach. I'd be interested to hear views on this, and on the content of these wiki pages. Phil [1] https://cwiki.apache.org/confluence/display/qpid/Proton [2] https://cwiki.apache.org/confluence/display/qpid/Proton+Testing
Re: Proton wiki pages and diagrams
Wow, it's been a long time since I worked on a software diagram that was edited by multiple people :) With classic desktop tools like MS Visio, I find that diagrams tend to be owned by specific individuals. I'm hopeful that the Gliffy diagrams will, to a greater extent, reside in the commons. Phil On Jan 30, 2013 6:10 PM, Rafael Schloming r...@alum.mit.edu wrote: I updated the testing diagram to include the python and C code that swig generates. This was mostly just as an excuse to play with the tool. I haven't used it before, but so far it seems promising. I think we could definitely benefit from using it more. --Rafael On Wed, Jan 30, 2013 at 12:32 PM, Phil Harvey p...@philharveyonline.com wrote: I've created a Proton page on Confluence [1] and moved all existing Proton pages underneath it. One of these is a Proton Testing page [2], which contains a diagram illustrating the Python, proton-c, proton-j and JNI layers involved. I used the Gliffy Confluence plugin to draw the diagram. As we lack a common, platform neutral drawing package I quite like the Gliffy approach. I'd be interested to hear views on this, and on the content of these wiki pages. Phil [1] https://cwiki.apache.org/confluence/display/qpid/Proton [2] https://cwiki.apache.org/confluence/display/qpid/Proton+Testing
Re: Reducing the visibility of proton-j constructors
When I asked the original question I had been assuming that the contrib modules were intended to be using the proton-api interfaces, but had to resort to concrete types for tactical reasons pending a more complete API. If that assumption were true, then using factory interfaces rather than constructors would clearly be necessary. But if this assumption is false then I personally have no problem with concrete classes being instantiated and used directly. However, at the risk of putting words into Rob's mouth, I guess he may be viewing the use of concrete classes across module boundaries to be A Bad Thing that tends to lead to leaky abstractions. Whether we apply that rule on Proton is an interesting question, and is one of those areas where I care more about us being consistent than about being right. Phil On 24 January 2013 19:03, Rafael Schloming r...@alum.mit.edu wrote: On Thu, Jan 24, 2013 at 5:06 AM, Rob Godfrey rob.j.godf...@gmail.com wrote: On 23 January 2013 17:36, Phil Harvey p...@philharveyonline.com wrote: As part of the Proton JNI work, I would like to remove all calls to proton-j implementation constructors from client code. I intend that factories will be used instead [1], thereby abstracting away whether the implementation is pure Java or proton-c-via-JNI. I'd like to check that folks are happy with me making this change, and to mention a couple of problems I've had. In this context, client code is anything outside the current sub-component, where our sub-components are Engine, Codec, Driver, Message and Messenger, plus each of the contrib modules, and of course third party code. To enforce this abstraction, I am planning to make the constructors of the affected classes package-private where possible. I believe that, although third party projects might already be calling these constructors, it is acceptable for us to change its public API in this manner while Proton is such a young project. 
+1 to all of the above Please shout if you disagree with any of the above. Now, onto my problem. I started off with the org.apache.qpid.proton.engine.impl package, and found that o.a.q.p.hawtdispatch.impl.AmqpTransport calls various methods on ConnectionImpl and TransportImpl, so simply using a Connection and Transport will not work. I don't know what to do about this, and would welcome people's opinions. So, the simplest change would be to change the factories to use covariant return types, e.g. EngineFactoryImpl becomes:

@Override
public ConnectionImpl createConnection() {
    return new ConnectionImpl();
}

@Override
public TransportImpl createTransport() {
    return new TransportImpl();
}
... etc

Code that requires the extended functionality offered by the pure Java implementation can thus instantiate the desired Factory directly. What's the point of going through the factory in this scenario rather than directly instantiating the classes as Hiram suggests? Is there some class of thing the factory would/could do that the constructor can't/shouldn't? A second refinement might be to actually separate out the interface and implementation within the pure Java implementation so that we have a well defined extended Java API. This interface could then be the return type of the factory. Maybe I'm misunderstanding, but what's the point of using an interface here if you're still locked into the pure Java impl? Are you expecting to swap out that impl under some circumstances? --Rafael
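The covariant-return idea above can be sketched end-to-end. The names EngineFactory, EngineFactoryImpl and ConnectionImpl come from the thread, but the stub bodies and the implementationName method are hypothetical additions for illustration:

```java
// Sketch of the covariant-return-type suggestion from the thread; the
// stub bodies are illustrative, not the real proton-j code.
public class CovariantFactoryDemo {

    interface Connection {}

    interface EngineFactory {
        Connection createConnection();
    }

    // Pure-Java implementation exposing a method not on the interface.
    public static class ConnectionImpl implements Connection {
        public String implementationName() { return "proton-j"; }
    }

    public static class EngineFactoryImpl implements EngineFactory {
        // Covariant return type: callers holding an EngineFactoryImpl get
        // the concrete ConnectionImpl without a cast; callers holding the
        // EngineFactory interface still see only Connection.
        @Override
        public ConnectionImpl createConnection() {
            return new ConnectionImpl();
        }
    }

    public static void main(String[] args) {
        // No cast needed, despite the interface declaring Connection:
        ConnectionImpl c = new EngineFactoryImpl().createConnection();
        System.out.println(c.implementationName());  // proton-j
    }
}
```

This is why code like AmqpTransport, which genuinely needs ConnectionImpl/TransportImpl methods, could keep working while interface-only callers stay implementation-agnostic.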
Reducing the visibility of proton-j constructors
As part of the Proton JNI work, I would like to remove all calls to proton-j implementation constructors from client code. I intend that factories will be used instead [1], thereby abstracting away whether the implementation is pure Java or proton-c-via-JNI. I'd like to check that folks are happy with me making this change, and to mention a couple of problems I've had. In this context, client code is anything outside the current sub-component, where our sub-components are Engine, Codec, Driver, Message and Messenger, plus each of the contrib modules, and of course third party code. To enforce this abstraction, I am planning to make the constructors of the affected classes package-private where possible. I believe that, although third party projects might already be calling these constructors, it is acceptable for us to change its public API in this manner while Proton is such a young project. Please shout if you disagree with any of the above. Now, onto my problem. I started off with the org.apache.qpid.proton.engine.impl package, and found that o.a.q.p.hawtdispatch.impl.AmqpTransport calls various methods on ConnectionImpl and TransportImpl, so simply using a Connection and Transport will not work. I don't know what to do about this, and would welcome people's opinions. Thanks Phil [1] for example, these work-in-progress classes: https://svn.apache.org/repos/asf/qpid/proton/branches/jni-binding/proton-j/proton-api/src/main/java/org/apache/qpid/proton/ProtonFactoryLoader.java https://svn.apache.org/repos/asf/qpid/proton/branches/jni-binding/proton-j/proton-api/src/main/java/org/apache/qpid/proton/engine/EngineFactory.java https://svn.apache.org/repos/asf/qpid/proton/branches/jni-binding/proton-j/proton/src/main/java/org/apache/qpid/proton/engine/impl/EngineFactoryImpl.java https://svn.apache.org/repos/asf/qpid/proton/branches/jni-binding/proton-c/bindings/java/jni/src/main/java/org/apache/qpid/proton/engine/jni/JNIEngineFactory.java
Re: Changing the Proton build system to accommodate jni bindings
It sounds like we're still a little way away from reaching a consensus. As a step towards this, I would like to clarify the relative priority of the various requirements that have come up. I've therefore created a page on the wiki that lists them, with a child page briefly describing the various proposals. https://cwiki.apache.org/confluence/display/qpid/Proton+build+system+requirements What are people's views on the relative priority of these requirements? Are there any I've missed? I think answering these questions is a prerequisite for agreeing the technical solution. Phil On 22 January 2013 13:34, Rob Godfrey rob.j.godf...@gmail.com wrote: On 22 January 2013 13:47, Rafael Schloming r...@alum.mit.edu wrote: On Tue, Jan 22, 2013 at 4:22 AM, Rob Godfrey rob.j.godf...@gmail.com wrote: On 21 January 2013 18:05, Rafael Schloming r...@alum.mit.edu wrote: On Mon, Jan 21, 2013 at 9:33 AM, Rob Godfrey rob.j.godf...@gmail.com wrote: Ummm... it's a dependency... you're familiar with those, yeah? The same way that the Qpid JMS clients depend on a JMS API jar, for which the source is readily available from another source. The JNI binding would build if the dependency was installed. The same way I believe the SSL code in the core of proton-c builds if the dependency for it is installed. That's not really a proper analogy. Again the JMS interfaces are defined outside of qpid. We don't release them, and we depend only on a well defined version of them, we don't share a release cycle with them. If the JMS API was something that we developed/defined right alongside the impl and was part of the same release process, we would certainly not be allowed to release without the source. This releasing without the source is a complete red herring and you know it. The source is released in whichever scheme we settle upon. If you want an example of dependencies within the qpid project, how did the AMQP 1.0 work on the C++ broker get released for 0.20? 
Did all the proton source get released with the C++ Broker / client? In the future are you expecting every part of the Qpid project which depends on proton to include its full source? If yes then how is the source tree going to work - is everything to be a subdirectory of proton-c? Again that's not really the same. If the Java API were on a separate (staggered) release cycle and the dependency was on a specific version, then that would be the same, but for what we're discussing, it really isn't. Proton and the cpp broker live under different trunks and branch/release separately, as far as I know this is not what you're proposing for the Java API, it is to live under the same trunk and branch/release together. The point was that the source code doesn't need to be in the same tarball let alone the same subdirectory in source control. If one considers that the Java API is a dependency then whether it is released concurrently or not with the JNI binding is moot. I've already said that it is preferable to have the source within the same tarball for the source release, but if needs be then I can live with the strict dependency view of things. I agree that having the source for the version of the Java API included in the source release bundle is advantageous. But if the collective decision is that we have taken a religious position that the source tarballs can only be svn exports of subdirectories of our source tree, then my preference would be to use separated dependencies over duplication in the repository. Personally I would think that having a more flexible policy on constructing the release source tarballs would make a lot more sense. You can call it religious if you like, but I don't think there is anything invalid about wanting to keep a simple mapping between release artifacts and proton developer environment. In the past we have had quite direct experience of exactly this factor contributing to very poor out of the box experience for users.
Correct me if I'm wrong, but I believe you yourself have actually advocated (or at least agreed with) this position in the past. That said, I don't think I'm asking for us to be entirely inflexible in that regard. There really are two opposing concerns here, one being the user experience for our release artifacts, and the other being the convenience of the development process for proton developers. I actually think there are three perspectives here. The user experience of our release artefacts, the committer experience of working on the checked-out codebase, and the release manager view of preparing the release artefacts from source control. All I'm asking is that we recognize that there is a real tradeoff and be willing to explore options that might preserve
Re: Changing the Proton build system to accommodate jni bindings
I worked with Keith on this proposal so I should state up front that I'm not coming to this debate from a neutral standpoint. Hopefully we can find a solution that is acceptable to everyone. To this end, we listed our understanding of the requirements on https://issues.apache.org/jira/browse/PROTON-194. I'm hoping that this discussion will allow us to clarify our requirements, such that the best technical solution naturally follows. I've added some comments in-line below... On 18 January 2013 19:29, Rafael Schloming r...@alum.mit.edu wrote: On Fri, Jan 18, 2013 at 11:17 AM, Keith W keith.w...@gmail.com wrote: We are currently in the process of implementing the proton-jni binding for the proton-c library that implements the Java Proton-API, allowing Java users to choose the C based proton stack if they wish. This work is being performed on the jni-branch under PROTON-192 (for the JNI work) and PROTON-194 (for the build system changes). Currently, Proton has two independent build systems: one for proton-c and its ruby/perl/python/php bindings (based on CMake/Make), and a second, separate build system for proton-j (based on Maven). As proton-jni will cut across both technology areas, non-trivial changes are required to both build systems. The nub of the problem is the sharing of the Java Proton-API between both proton-c and proton-j trees. Solutions based on svn-externals and a simple tree copy have been considered and discussed at length on conference calls. We have identified drawbacks in both solutions. To be honest I don't think we've sufficiently explored the copy option. While it's true there were a lot of hypothetical issues thrown around on the calls, many of them have quite reasonable solutions that may well be less work than the alternatives. In my experience, maintaining two copies of any code is usually a bad thing. However, I try to be open minded so I agree that it's worth exploring this option.
I'd be interested to hear your opinion on (a) the scenarios when it would be acceptable for these two copies to diverge and (b) the mechanism you're envisaging for achieving convergence. I imagine there are both technical and process dimensions to making this work. This email proposes another solution. The hope is that this proposal can be developed on list into a solution that is acceptable to all. Proposal: Move the Java Proton-API to the top level so that it can be shared simply and conveniently by both proton-j and proton-c.

* Maven builds the proton-api JAR to a well known location
* Cmake/make builds proton-c and all bindings including java. As the building of the java binding requires the Java Proton API, it is optional and only takes place if proton-api has been previously created by Maven (or found by other means).
* Maven builds proton-j
* Maven runs the system tests against either proton-c or proton-j. The system tests are currently written in Python but are being augmented with new ones written in Java.

Proposed Directory Structure:

proton
|-- release.sh/bat       # Builds, tests and packages proton-c and proton-j
|-- pom.xml
|
|-- proton-api           # Java Proton-API
|   |-- pom.xml          # Will create proton-api.jar at a well known location in tree
|   `-- main
|
|-- proton-c             # Proton-C and Proton-C bindings
|   |-- CMakeLists.txt
|   `-- bindings
|       |-- CMakeLists.txt
|       `-- java
|           |-- CMakeLists.txt
|           `-- jni
|               `-- CMakeLists.txt  # Creates proton-jni.jar using proton-api.jar from a well known
|                                   # location in tree, or skip if jar cannot be found
|
|-- proton-j             # Proton-J
|   |-- pom.xml          # Creates proton-j.jar using proton-api.jar (found via Maven)
|   `-- src
|       `-- main
|
`-- tests                # Python and Java based system tests that test equally Proton-C and Proton-J
    |-- pom.xml
    `-- src
        `-- test

Use cases: usecase #1 - Proton-C Developer exclusively focused on Proton-C This developer may choose to check out the proton-c subtree. The build tool set remains unchanged from today i.e.
cmake and make. By default, all bindings will be built except for the Java bindings (as CMake would fail to find proton-api.jar). For flexibility, we would include an option to have CMake search another directory, allowing proton-api.jar to be found in non-standard locations.

Use case #2 - Proton-C developer who wishes to run all system tests. This developer must check out the complete proton tree. The build tool set now includes Maven, in order to build proton-api and run the complete system test suite. Typical commands used by this developer would
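The "optional Java binding" bullet above could be expressed in CMake along the following lines. This is only a sketch: the paths, the PROTON_API_DIR variable and the jni subdirectory name are assumptions for illustration, not the actual proton build files.

```cmake
# Sketch: build the JNI binding only when proton-api.jar can be found,
# either at the well-known in-tree location produced by Maven or in a
# user-supplied directory (PROTON_API_DIR is a hypothetical override).
find_file(PROTON_API_JAR proton-api.jar
          PATHS ${CMAKE_SOURCE_DIR}/../proton-api/target
                ${PROTON_API_DIR})

if (PROTON_API_JAR)
  message(STATUS "Found ${PROTON_API_JAR}; building the java binding")
  add_subdirectory(jni)
else ()
  message(STATUS "proton-api.jar not found; skipping the java binding")
endif ()
```

This mirrors the use-case #1 behaviour described above: a developer who never runs Maven simply gets a build without the Java binding, with no error.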
Re: [VOTE] 0.3 RC3
+1 Looks good to me. I used Maven to fetch the Java API and implementation jars, and ran the Python tests against them. All tests passed. I also eyeballed the jar contents, including the MANIFEST.MF files, and everything looked sensible. Phil On 9 January 2013 02:53, Rafael Schloming r...@alum.mit.edu wrote: Source is here: http://people.apache.org/~rhs/qpid-proton-0.3rc3/ Java binaries are here: https://repository.apache.org/content/repositories/orgapacheqpid-118/ Fixes since RC2 include: - messenger now reports aborted connections - tarball for ruby gem generation - ssl fix (PROTON-171) --Rafael
Re: inconsistent proton library names?
Hi Rob, I believe we're thinking along the same lines. The ServiceLoader approach does indeed only affect which implementation you get by default. We will also allow the client to explicitly choose their implementation if they wish, and there will be no problem with both implementations being used in the same process (this will be handy for writing interoperability tests). Phil On 7 January 2013 08:37, Rob Godfrey rob.j.godf...@gmail.com wrote: I've not looked at the branch lately (only just back from vacation), but I would very much hope that there would be nothing preventing having both the JNI and native-Java libraries on the classpath, and allowing for explicit creation of the desired implementation of Connection / Messenger / whatever (which I'd probably suggest be done via a factory rather than explicit construction, but that's just personal taste). I would hope the ServiceLoader would only affect the implementation created by *default* from a factory -- Rob On 4 January 2013 22:54, Phil Harvey p...@philharveyonline.com wrote: The in-progress code on the jni branch does not currently allow this, although there is no technical barrier to doing it. We just haven't yet decided on the nicest API for allowing the application to choose the implementation it wants. The ability to mix implementations within a JVM will certainly be nice when writing interoperability tests. Phil On Jan 4, 2013 9:16 PM, Rafael Schloming r...@alum.mit.edu wrote: Does that mean you won't be able to use both the C and Java implementations simultaneously within a single JVM? --Rafael On Fri, Jan 4, 2013 at 4:02 PM, Phil Harvey p...@philharveyonline.com wrote: Ditto for Java. From the developer's point of view, they'll simply be using the Java interfaces in proton-api, such as Connection [1]. Our current intention is that the choice of whether to use the pure-Java implementation or the proton-c-via-Swig-via-JNI one will be made using a factory instantiated by a java.util.ServiceLoader.
The decision will therefore depend on your runtime classpath. Client code will not have a build-time dependency on the Swig/JNI layer. [1] http://svn.apache.org/repos/asf/qpid/proton/trunk/proton-j/proton-api/src/main/java/org/apache/qpid/proton/engine/Connection.java On 4 January 2013 20:40, Darryl L. Pierce dpie...@redhat.com wrote: On Fri, Jan 04, 2013 at 03:32:44PM -0500, Rafael Schloming wrote: Given what little I know of loading JNI stuff, that seems to make sense for Java. FWIW, the python and ruby bindings don't ever actually expose the name of the C extension library, since in both cases we have the so-called buttercream frosting layer that wraps the raw C extension module. I would hope we'd have something similar for perl and Java so that these names shouldn't ever be visible to users. Perl does. It uses the qpid::proton namespace for the Message and Messenger classes. -- Darryl L. Pierce, Sr. Software Engineer @ Red Hat, Inc. Delivering value year after year. Red Hat ranks #1 in value among software vendors. http://www.redhat.com/promo/vendor/
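The factory-plus-ServiceLoader arrangement discussed in this thread can be sketched as follows. Note the names here (EngineFactory, PureJavaEngineFactory, FactoryDemo) are hypothetical stand-ins, not the actual proton-api types; the point is only the mechanism: ServiceLoader picks a default from the runtime classpath, while explicit construction remains available for mixing implementations in one JVM.

```java
import java.util.Iterator;
import java.util.ServiceLoader;

// Hypothetical SPI interface standing in for the real proton-api factory.
interface EngineFactory {
    String implementationName();
}

// A pure-Java implementation used both as an explicit choice and as a fallback.
class PureJavaEngineFactory implements EngineFactory {
    public String implementationName() { return "proton-j"; }
}

public class FactoryDemo {
    // Default selection: take the first provider registered on the classpath
    // (via META-INF/services), falling back to the pure-Java implementation.
    static EngineFactory loadDefault() {
        Iterator<EngineFactory> it =
                ServiceLoader.load(EngineFactory.class).iterator();
        return it.hasNext() ? it.next() : new PureJavaEngineFactory();
    }

    public static void main(String[] args) {
        // With no provider registered, the fallback is chosen.
        System.out.println(FactoryDemo.loadDefault().implementationName());
        // Explicit choice bypasses the loader entirely, so a JNI-backed and a
        // pure-Java factory could coexist in the same process.
        EngineFactory explicit = new PureJavaEngineFactory();
        System.out.println(explicit.implementationName());
    }
}
```

Dropping a JNI-backed jar with its own META-INF/services entry onto the classpath would then change the default without recompiling client code, which is the behaviour Rob asks for.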
inconsistent proton library names?
I've been working on the Java binding of proton-c and have a couple of questions about how we're naming our various libraries. On Linux, running make all produces the following:

bindings/ruby/cproton.so
bindings/python/_cproton.so
bindings/perl/libcproton_perl.so
bindings/libproton-swig.so (on the JNI branch only)
libqpid-proton.so

1. Naming conventions. All things being equal, we should adopt a consistent approach regarding: - whether to put a lib prefix on the file name (my preference is to always do this); - whether the language name should appear in the binding libraries. I'm guessing that all things are *not* equal, and that we have deliberately named the bindings differently for some reason. Can anyone enlighten me? 2. The lib prefix on old cmake versions. Regarding the lib prefix, I am using an old version of cmake (v2.6) which does not add the prefix by default. I can add set_target_properties(proton-xxx PROPERTIES PREFIX "lib") as a workaround. This still works OK on newer cmake versions. Unfortunately I think this will force Windows DLLs to have the lib prefix, which is undesirable. Can anyone advise on the best approach? I'm not a cmake expert. Thanks Phil
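One possible answer to the Windows concern raised above is to apply the prefix workaround only on platforms where the lib prefix is conventional. This is an illustrative sketch, not a verified recipe; proton-swig is a stand-in target name.

```cmake
# Sketch: force the "lib" prefix only on non-Windows platforms, so that
# old cmake versions (which may not add it by default) still produce
# libproton-swig.so, while Windows DLLs keep their usual naming.
if (NOT WIN32)
  set_target_properties(proton-swig PROPERTIES PREFIX "lib")
endif ()
```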
Re: inconsistent proton library names?
Thanks for the responses, guys. That all makes sense. The only change I'd propose is therefore that the Perl and Java bindings: bindings/perl/libcproton_perl.so bindings/java/libproton-swig.so ... should both be renamed to libcproton.so. Compared to the other bindings, it seems inconsistent for the former to state its Perl-ness in its name, and for the latter to state its Swig-ness. Thoughts? Phil On Fri, Jan 04, 2013 at 11:04:31AM -0500, Ted Ross wrote: Phil, The only shared object in that list that is a proper library is libqpid-proton.so. The others are extension modules for their various scripting languages. I'm not 100% sure, but I believe the naming conventions are dictated by the scripting languages' extension mechanisms. That's true. The Ruby VM requires that the name of a native extension library match the name of the extension, and also the initialization entry point in the library; i.e., in order to do a require 'qpid_proton' we need a file named qpid_proton.so that has a method named Init_qpid_proton inside. -- Darryl L. Pierce, Sr. Software Engineer @ Red Hat, Inc. Delivering value year after year. Red Hat ranks #1 in value among software vendors. http://www.redhat.com/promo/vendor/
Re: inconsistent proton library names?
Ditto for Java. From the developer's point of view, they'll simply be using the Java interfaces in proton-api, such as Connection [1]. Our current intention is that the choice of whether to use the pure-Java implementation or the proton-c-via-Swig-via-JNI one will be made using a factory instantiated by a java.util.ServiceLoader. The decision will therefore depend on your runtime classpath. Client code will not have a build-time dependency on the Swig/JNI layer. [1] http://svn.apache.org/repos/asf/qpid/proton/trunk/proton-j/proton-api/src/main/java/org/apache/qpid/proton/engine/Connection.java On 4 January 2013 20:40, Darryl L. Pierce dpie...@redhat.com wrote: On Fri, Jan 04, 2013 at 03:32:44PM -0500, Rafael Schloming wrote: Given what little I know of loading JNI stuff, that seems to make sense for Java. FWIW, the python and ruby bindings don't ever actually expose the name of the C extension library, since in both cases we have the so-called buttercream frosting layer that wraps the raw C extension module. I would hope we'd have something similar for perl and Java so that these names shouldn't ever be visible to users. Perl does. It uses the qpid::proton namespace for the Message and Messenger classes. -- Darryl L. Pierce, Sr. Software Engineer @ Red Hat, Inc. Delivering value year after year. Red Hat ranks #1 in value among software vendors. http://www.redhat.com/promo/vendor/
Re: inconsistent proton library names?
The in-progress code on the jni branch does not currently allow this, although there is no technical barrier to doing it. We just haven't yet decided on the nicest API for allowing the application to choose the implementation it wants. The ability to mix implementations within a JVM will certainly be nice when writing interoperability tests. Phil On Jan 4, 2013 9:16 PM, Rafael Schloming r...@alum.mit.edu wrote: Does that mean you won't be able to use both the C and Java implementations simultaneously within a single JVM? --Rafael On Fri, Jan 4, 2013 at 4:02 PM, Phil Harvey p...@philharveyonline.com wrote: Ditto for Java. From the developer's point of view, they'll simply be using the Java interfaces in proton-api, such as Connection [1]. Our current intention is that the choice of whether to use the pure-Java implementation or the proton-c-via-Swig-via-JNI one will be made using a factory instantiated by a java.util.ServiceLoader. The decision will therefore depend on your runtime classpath. Client code will not have a build-time dependency on the Swig/JNI layer. [1] http://svn.apache.org/repos/asf/qpid/proton/trunk/proton-j/proton-api/src/main/java/org/apache/qpid/proton/engine/Connection.java On 4 January 2013 20:40, Darryl L. Pierce dpie...@redhat.com wrote: On Fri, Jan 04, 2013 at 03:32:44PM -0500, Rafael Schloming wrote: Given what little I know of loading JNI stuff, that seems to make sense for Java. FWIW, the python and ruby bindings don't ever actually expose the name of the C extension library, since in both cases we have the so-called buttercream frosting layer that wraps the raw C extension module. I would hope we'd have something similar for perl and Java so that these names shouldn't ever be visible to users. Perl does. It uses the qpid::proton namespace for the Message and Messenger classes. -- Darryl L. Pierce, Sr. Software Engineer @ Red Hat, Inc. Delivering value year after year. Red Hat ranks #1 in value among software vendors.
http://www.redhat.com/promo/vendor/
Re: SSL related noise in tests? (was Re: Proton 0.3 ETA?)
Yep, this has been addressed in PROTON-179. On Dec 5, 2012 2:44 PM, Rob Godfrey rob.j.godf...@gmail.com wrote: Yes - I'm sure Phil is working on a patch to remove this, the longer-term solution being to add some sort of cross-platform logging support -- Rob On 5 December 2012 15:40, Gordon Sim g...@redhat.com wrote: On 12/05/2012 01:56 PM, Gordon Sim wrote: On 12/05/2012 01:09 PM, Hiram Chirino wrote: Are we getting closer? I would like to start cutting release candidates for ActiveMQ and I can't do that until proton-j is released. I'd like to get the messenger API and implementation in before we do that. My plan was to try and add in the acknowledgement support that was added to the C version and then check it all in. If I don't have time for that, then I would at least check in what is ready now. BTW, I'm seeing lots of 'noise' printed to the console when running the proton-j tests. They seem to all be related to SSL. Is this a known issue? To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org For additional commands, e-mail: dev-h...@qpid.apache.org
Re: Problems building and running proton-c on Linux RHEL 5
Thanks for the reply, Ted. In an attempt to avoid the uuid module problem I've managed to upgrade to a later version of Python (2.6), and am now hitting a different error -- this time at build time. When I run make all I get the following:

$ cd build
$ cmake -DCMAKE_INSTALL_PREFIX=/usr ..
$ make all
...
Scanning dependencies of target _cproton
[ 86%] Swig source
/home/phil/dev/proton/proton-c/include/proton/engine.h:67: Warning(451): Setting a const char * variable may leak memory.
[ 91%] Building C object bindings/python/CMakeFiles/_cproton.dir/pythonPYTHON_wrap.c.o
/home/phil/dev/proton/proton-c/build/bindings/python/pythonPYTHON_wrap.c: In function 'SWIG_Python_ConvertFunctionPtr':
/home/phil/dev/proton/proton-c/build/bindings/python/pythonPYTHON_wrap.c:2035: warning: initialization discards qualifiers from pointer target type
/home/phil/dev/proton/proton-c/build/bindings/python/pythonPYTHON_wrap.c: In function 'SWIG_AsCharPtrAndSize':
/home/phil/dev/proton/proton-c/build/bindings/python/pythonPYTHON_wrap.c:2548: warning: passing argument 3 of 'PyString_AsStringAndSize' from incompatible pointer type
/home/phil/dev/proton/proton-c/build/bindings/python/pythonPYTHON_wrap.c: In function 'SWIG_Python_FixMethods':
/home/phil/dev/proton/proton-c/build/bindings/python/pythonPYTHON_wrap.c:19222: warning: initialization discards qualifiers from pointer target type
Linking C shared module _cproton.so
/usr/bin/ld: /usr/local/lib/python2.6/config/libpython2.6.a(abstract.o): relocation R_X86_64_32 against `a local symbol' can not be used when making a shared object; recompile with -fPIC
/usr/local/lib/python2.6/config/libpython2.6.a: could not read symbols: Bad value
collect2: ld returned 1 exit status
make[2]: *** [bindings/python/_cproton.so] Error 1
make[1]: *** [bindings/python/CMakeFiles/_cproton.dir/all] Error 2
make: *** [all] Error 2

This looks like it could be an environmental problem rather than an issue with Proton itself, but any suggestions about how to
work around it would be gratefully received. I don't build a lot of C code, so am not the best at diagnosing this kind of error. Thanks, Phil On 29 November 2012 19:44, Ted Ross tr...@redhat.com wrote: Phil, With regard to the python uuid issue, this was handled in qpid (see qpid/python/build/lib/qpid/datatypes.py). Perhaps proton needs to use a similar approach. -Ted On 11/29/2012 11:13 AM, Phil Harvey wrote: I'm having problems building and running proton-c on my machine. I'm hitting two problems so far: - I get an Unable to find 'php.swg' error. A web search suggests that this relates to the version of swig I have installed (it's v1.3.29). My current workaround is to comment out the PHP Swig stuff in the make file. - When I try to run the Python tests, Python errors with the message No module named uuid. Again, I believe this is a versioning problem. I'm running Python 2.4, and I believe the uuid module was introduced in later versions of Python. Unfortunately my rights to upgrade the packages on my machine are quite limited. I'm interested to know whether others have seen this problem, and whether they believe any changes need to be made to the make files, or is my environment simply too old to be supported by Proton? Also, although dependencies such as Python and Swig are mentioned in proton-c/README, the required *versions* of them are not. Is this stuff written down anywhere? If not, does anyone know what the required versions of the dependencies actually are? Thanks Phil
Re: Unexpected behaviour in test ssl.py - possible bug?
Hi Ken, Thanks for looking into it. I've raised https://issues.apache.org/jira/browse/PROTON-171 and assigned it to you. Phil On Nov 28, 2012 6:19 PM, Ken Giusti kgiu...@redhat.com wrote: Hi Phil, I think you've uncovered a bug - definitely raise a jira and assign it to me. Now that you mention it, I'm surprised that the original version of pump() worked properly - it seems like it would risk discarding any output not consumed by the input handler. Your changes to pump() should be an improvement. -K - Original Message - Hi, We've been working on the Java SSL implementation and are seeing a test that fails against proton-c but works against proton-j. We're not sure whether the problem is in proton-c or in our modified test, and are hoping someone who knows about proton-c's SSL implementation can give a view on this before we raise a Jira. One of the scenarios we wanted to cover in our testing was the case where the Transport input method leaves left-overs, e.g. when you call server.input() with 100 bytes of input but it only accepts 20, as indicated by its return value. For example, we expect this to happen if the preceding client.output() call is told to write to a buffer sized such that its output contains a trailing *fragment* of an SSL packet, which input() won't be able to decipher. We therefore modified the pump method in proton/tests/proton_tests/ssl.py to handle this case. In its loop, it now captures the bytes left over after calling input(), and prepends them to the input() invocation in the next iteration. The buffer size is now a parameter, so individual tests can exercise the packet-fragmenting behaviour described above.
We made the following change:

---
diff --git a/tests/proton_tests/ssl.py b/tests/proton_tests/ssl.py
index 8567b1b..237c3da 100644
--- a/tests/proton_tests/ssl.py
+++ b/tests/proton_tests/ssl.py
@@ -43,13 +43,32 @@ class SslTest(common.Test):
         self.t_client = None
         self.t_server = None

-    def _pump(self):
+    def _pump(self, buffer_size=1024):
+        """
+        Make the transport send up to buffer_size bytes (this will be the AMQP
+        header and open frame), returning a buffer containing the bytes sent.
+        Transport is stateful so this will return 0 when it has no more frames
+        to send.
+        TODO this function is duplicated in sasl.py. Should be moved to a common place.
+        """
+        out_client_leftover_by_server = ""
+        out_server_leftover_by_client = ""
+        i = 0
         while True:
-            out_client = self.t_client.output(1024)
-            out_server = self.t_server.output(1024)
-            if out_client: self.t_server.input(out_client)
-            if out_server: self.t_client.input(out_server)
+            out_client = out_client_leftover_by_server + self.t_client.output(buffer_size)
+            out_server = out_server_leftover_by_client + self.t_server.output(buffer_size)
+
+            if out_client:
+                number_server_consumed = self.t_server.input(out_client)
+                out_client_leftover_by_server = out_client[number_server_consumed:]  # empty if it consumed everything
+
+            if out_server:
+                number_client_consumed = self.t_client.input(out_server)
+                out_server_leftover_by_client = out_server[number_client_consumed:]  # empty if it consumed everything
+
             if not out_client and not out_server: break
+            i = i + 1

     def _testpath(self, file):
         """ Set the full path to the certificate, keyfile, etc. for the test. """
---

Several ssl tests now fail when run against proton-c, all with the same error. This surprised us because we hadn't started playing with the buffer size yet - we were still using the default of 1024.
For example, test_server_authentication gives this output:

proton_tests.ssl.SslTest.test_server_authentication .[0xa2ca208:0] ERROR[-2] SSL Failure: error:1408F119:SSL routines:SSL3_GET_RECORD:decryption failed or bad record mac
 fail
Error during test: Traceback (most recent call last):
  File "./tests/proton-test", line 331, in run
    phase()
  File "/home/phil/dev/proton/tests/proton_tests/ssl.py", line 166, in test_server_authentication
    self._pump()
  File "/home/phil/dev/proton/tests/proton_tests/ssl.py", line 63, in _pump
    number_server_consumed = self.t_server.input(out_client)
  File "/home/phil/dev/proton/proton-c/bindings/python/proton.py", line 2141, in input
    return self._check(n)
  File "/home/phil/dev/proton/proton-c/bindings/python/proton.py", line 2115, in _check
    raise
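The leftover-handling pump loop at the heart of this thread can be sketched standalone as below. FakeTransport is a hypothetical stand-in for proton's Transport, reproducing only the shape that matters here: output(n) yields up to n pending bytes, and input(data) returns the number of bytes it consumed, which may be less than len(data).

```python
# Minimal, self-contained sketch of the leftover-aware pump described above.
class FakeTransport:
    def __init__(self, to_send, accept_at_most):
        self._to_send = to_send                # bytes this side still wants to write
        self._accept_at_most = accept_at_most  # simulate partial consumption by input()
        self.received = b""

    def output(self, size):
        chunk, self._to_send = self._to_send[:size], self._to_send[size:]
        return chunk

    def input(self, data):
        consumed = min(len(data), self._accept_at_most)
        self.received += data[:consumed]
        return consumed


def pump(client, server, buffer_size=1024):
    """Shuttle bytes both ways, re-offering any unconsumed leftover next time."""
    client_leftover = b""
    server_leftover = b""
    while True:
        out_client = client_leftover + client.output(buffer_size)
        out_server = server_leftover + server.output(buffer_size)
        if out_client:
            n = server.input(out_client)
            client_leftover = out_client[n:]  # empty if everything was consumed
        if out_server:
            n = client.input(out_server)
            server_leftover = out_server[n:]
        if not out_client and not out_server:
            break
```

With a server that only accepts three bytes per call, the loop keeps re-offering the tail until everything is delivered, which is exactly the behaviour the original 1024-byte pump silently skipped.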