Re: Proposed SASL changes (API and functional)
This is from Andrew's wiki comment. Sorry to paste it back to the list, but I'm having some difficulty commenting there:

1. Setters with no getters

Philosophically I don't agree that you need to make all properties read/write. I see no particular reason to make these properties readable, since they will never change once they are set, or in the case of the password should actually not be accessible once set (because the implementation *should* be scrubbing the bytes from memory after use). In my view, if the application needs the value again it already has it. In the case of the read-only property authenticated user I definitely think that needs to be read only. Having said that, I don't feel that strongly about getters for the client username and hostname.

There's actually an important point here worth noting. With reactive use, I don't think it's true, pragmatically speaking, that the application has the value again when it needs it. When your primary means of operating on a connection is through handling events, the only state you have easy access to is what is provided by those events. Taking your suggestion, if I wanted to do something simple like log a debug statement from an event handler and include the hostname and/or username of the connection in that info, my only recourse would be to malloc some sort of ancillary struct, attach it to the connection, and fill it in with the very same hostname that the connection is holding internally - or alternatively to access some sort of global state that holds a map from the connection back to that same information. If your point is that this is possible then of course that's true, but it doesn't seem at all reasonable.

2. Inconsistency with existing use of remote in API

I take your point - I'm happy to remove remote from the API name - would connected be all right? pn_transport_set_hostname() just doesn't seem specific enough to me - it might just as well be telling the transport what *our* hostname is.
3. Redundancy of pn_connection_set_hostname() and pn_transport_set_something_hostname()

Yes, these are definitely redundant, and I would need to deprecate the connection version and set it from the transport when binding them together - good catch. The transport version must be primary, as (in principle at least, if not in the current implementation) you don't need the connection object until you have authenticated the transport, and authentication and encryption may need to know the hostname you are connecting to. I think it would have to be an error to rebind (on reconnect) a connection with a different hostname than the transport hostname.

This isn't consistent with how the connection and transport are actually used. With the reactive API, the user creates the connection endpoint first and configures it with all the relevant details that it cares about. The transport isn't created at all until the client actually opens the connection (which could be somewhere completely different from where it configures the connection). It's also important to note that the user doesn't actually create the transport at all. The default handlers do this when the PN_CONNECTION_LOCAL_OPEN event occurs. The users don't even need to be aware that a transport object exists at all if they don't care to customize it. This is a nice property and would be difficult to maintain if you start pushing connection-level configuration like hostname into the transport.

I think if you flip things around it actually makes more sense. As a server you are going to have a transport prior to having a connection, and in this case you want to access the hostname that your remote peer desires for vhost purposes, but it makes no sense to actually set it.
As a client, a transport is pretty much useless until it is bound to a connection, as you can't safely do much more than send the protocol header, so the natural sequence is to create the connection first and set the hostname you want to connect to, and not worry about the transport.

4. Having a separate set_user/set_password

That would make sense. However, from this conversation I'm wondering if we should more carefully distinguish the client and server ends, and so have a client-only API to set user/password and a server-only API to extract the authenticated username.

So in conclusion, how about:

- Changing pn_transport_set_remote_hostname() → pn_transport_set_connect_hostname() (connect/connected/connection?)
- Adding pn_transport_get_connect_hostname() (connect/connected/connection?)
- Deprecating pn_connection_set/get_hostname() in favour of pn_transport_set/get_connect_hostname() (actually changing the pn_transport_bind() code would be required too)
- Removing pn_transport_set_user_password() and pn_transport_get_user()
- Replacing them with
Re: PROTON-827: Reactive client binding for the go programming language
Always remember the newline to protect against automated signatures? :) On 26 February 2015 at 09:46, Dominic Evans dominic.ev...@uk.ibm.com wrote: http://golang.org/cmd/cgo/ :/ -- Unless stated otherwise above: IBM United Kingdom Limited - Registered in England and Wales with number 741598. Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU
[jira] [Commented] (PROTON-829) Possible reference counting bug in pn_clear_tpwork
[ https://issues.apache.org/jira/browse/PROTON-829?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14338658#comment-14338658 ] Rafael H. Schloming commented on PROTON-829:

What really motivates the current scheme is not the pooling, but the need to avoid circular references. The object graph simply has circular references and there is no reasonable way to avoid this in the current design, yet all these objects need to be wrappable into python, which means you need to be able to present an incref/decref style memory management interface. As far as I can tell, that pretty much demands that you have a hybrid refcount/incremental gc scheme of some sort. I fully agree that pn_incref(...)/pn_decref(...) is an incredibly obscure way to trigger an incremental garbage collect though, and the code needs to be better factored to make the design more transparent.

Possible reference counting bug in pn_clear_tpwork -- Key: PROTON-829 URL: https://issues.apache.org/jira/browse/PROTON-829 Project: Qpid Proton Issue Type: Bug Components: proton-c Affects Versions: 0.8 Reporter: Alan Conway Assignee: Alan Conway Fix For: 0.9

See QPID-6415 which describes a core dump in the qpid tests that appears when using the current 0.9 proton master. The qpid tests pass OK with proton 0.8. The valgrind output in QPID-6415 shows that a connection is deleted while it is being finalized by a call from pn_connection_unbound to pn_clear_tpwork.
I do not yet understand the details, but removing the following strange code fixes the problem and passes the proton test suite without valgrind errors:

{noformat}
--- a/proton-c/src/engine/engine.c
+++ b/proton-c/src/engine/engine.c
@@ -690,10 +690,10 @@ void pn_clear_tpwork(pn_delivery_t *delivery)
   {
     LL_REMOVE(connection, tpwork, delivery);
     delivery->tpwork = false;
-    if (pn_refcount(delivery) > 0) {
-      pn_incref(delivery);
-      pn_decref(delivery);
-    }
   }
 }
{noformat}

The code is strange because (a) you should never examine a refcount except for debugging purposes, and (b) under normal refcounting semantics incref+decref is a no-op. Is removing this code OK?

-- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (PROTON-829) Possible reference counting bug in pn_clear_tpwork
[ https://issues.apache.org/jira/browse/PROTON-829?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14338534#comment-14338534 ] Alan Conway commented on PROTON-829:

Thanks for the info, I agree that it is not clear yet whether the bug is in proton or qpid; I'll keep both this and QPID-6415 open till it is. My feeling is that this is an abuse of a simple and widely-known memory management pattern that will lead people into errors (e.g. how would anybody coming to the code know when they need to do the apparent incref/decref no-op and when they don't?) My feeling is that memory pooling and re-use should happen at a lower level and not be mixed in with object finalizers (e.g. like qpid-dispatch's allocators), but I don't understand the context very well yet so I'll say no more.

Possible reference counting bug in pn_clear_tpwork -- Key: PROTON-829 URL: https://issues.apache.org/jira/browse/PROTON-829 Project: Qpid Proton Issue Type: Bug Components: proton-c Affects Versions: 0.8 Reporter: Alan Conway Assignee: Alan Conway Fix For: 0.9
Re: Proposed SASL changes (API and functional)
On Wed, Feb 25, 2015 at 7:45 PM, Andrew Stitcher astitc...@redhat.com wrote: If it is ok with them I will copy the comments over there: Alan, Jakub? Sorry, I missed the wiki part. Feel free to copy my comment there if you want.
[jira] [Resolved] (PROTON-828) Python binding does not support MODIFIED delivery state
[ https://issues.apache.org/jira/browse/PROTON-828?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ken Giusti resolved PROTON-828. --- Resolution: Fixed

Python binding does not support MODIFIED delivery state --- Key: PROTON-828 URL: https://issues.apache.org/jira/browse/PROTON-828 Project: Qpid Proton Issue Type: Bug Components: python-binding Affects Versions: 0.8 Reporter: Ken Giusti Assignee: Ken Giusti Priority: Blocker Fix For: 0.9

{noformat}
>>> import proton
>>> proton.RELEASED
RELEASED
>>> proton.MODIFIED
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: 'module' object has no attribute 'MODIFIED'
>>> proton.ACCEPTED
ACCEPTED
{noformat}

-- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (PROTON-829) Possible reference counting bug in pn_clear_tpwork
[ https://issues.apache.org/jira/browse/PROTON-829?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14338377#comment-14338377 ] Rafael H. Schloming commented on PROTON-829:

I added that code to get the python tests to pass without valgrind errors, so I'm surprised that removing it doesn't cause issues. I'll try to explain a little bit about what's going on. (I've commented this elsewhere in the code, but I missed this spot. I think Andrew pointed out that this might be clearer with a helper function instead of/in addition to a comment.)

There are definitely non-debug cases where it is useful to examine the refcount. In particular, a finalizer gets fired when the refcount hits zero, but the finalizer can sometimes run code that creates a new reference to the object that is being finalized. There are a few places where this pattern is very useful:

- The finalizer can run some logic that decides whether or not the object should be pooled, and if so, sticking it in the pool adds a reference.
- The finalizer might run some code that posts an event, thereby creating a reference from the collector to the object.
- The connection/session/link/delivery/transport objects all have pointers to each other, and to just naively refcount all those pointers would result in circular references and memory leaks. In order to avoid circular references, we don't refcount internal pointers, e.g. the parent-child pointers; instead the finalizers of all the children have logic that runs and figures out if it is safe for the child to go away (basically does a tiny amount of incremental garbage collection). If the finalizer's garbage collection finds an internal reference, the parent of that object takes over the last external reference. In the latter case an external pointer to a child may be created again, and to account for this, the incref of all the child objects is overridden to remove the reference that the parent took over.
In all these cases, the logic that may or may not create a new reference is run at the beginning of the finalizer. The finalizer then needs to check if a new reference has actually been created in order to figure out whether to actually free the resources used by the object or to return. This pattern could perhaps be formalized in the object code by distinguishing between finalizers (the part that runs application logic which can create new references) and destructors (the part that actually frees anything malloced in the initializer).

This should also explain why incref/decref is not necessarily a no-op: it is basically retriggering the gc portion of the code in the finalizer that decides whether or not an object should really go away, and in the rare case that you toggle some state that might influence that gc logic, you need to get it to run again. Clearing the tp work queue is one such case. The reason it is guarded by a check on the refcount is that we also call that function from inside the delivery finalizer, and in that case if the refcount is zero we've already decided that the object should go away.

Again, this gc part of the pattern could probably be formalized a bit, e.g. possibly having some sort of mark/sweep style thing that is in fact triggered by external references reaching zero. It's probably not a good idea to attempt that anytime soon, though, since what complicates this picture is not so much the fact that it isn't formalized, but rather that the engine uses ad-hoc data structures that predate the development of the collections that are part of the object library, and therefore has to manually account and search for references in a type-specific way rather than using the automatic ref counting and/or search routines available in the collections. This manual, type-specific accounting is what makes the gc logic a bit brittle, and could possibly be the source of the bug you are seeing.
At some point relatively soon (probably after 0.9 but before the next release) I expect to have to update a significant chunk of those data structures to properly use the collections. Until then, if you manage to pare this down to a minimal reproducer in terms of proton API calls, that would definitely suggest the bug is in proton and I can probably help debug more from there. As it is, it looks like it could possibly just be a plain old double free style bug in messaging, and removing the above code just masks it by introducing a memory leak.

Possible reference counting bug in pn_clear_tpwork -- Key: PROTON-829 URL: https://issues.apache.org/jira/browse/PROTON-829 Project: Qpid Proton Issue Type: Bug Components: proton-c Affects Versions: 0.8 Reporter: Alan Conway Assignee: Alan Conway Fix For: 0.9
[jira] [Commented] (PROTON-829) Possible reference counting bug in pn_clear_tpwork
[ https://issues.apache.org/jira/browse/PROTON-829?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14338757#comment-14338757 ] Alan Conway commented on PROTON-829:

Found the fix to the qpid problem as follows; I think there is still a (minor) issue for proton. The fix is to call pn_transport_free *after* pn_connection_free. The other way round works fine with proton 0.7 or 0.8 but causes memory errors and crashes with proton 0.9. The API doc for pn_transport_free says:

{code}
/**
 * Free a transport object.
 *
 * When a transport is freed, it is automatically unbound from its
 * associated connection.
 *
 * @param[in] transport a transport object or NULL
 */
PN_EXTERN void pn_transport_free(pn_transport_t *transport);
{code}

It does NOT say that the connection is freed. So either we need to modify the behavior so the connection is not freed (preferable since this is backwards compatible) or we need to change the API doc text to make it clear and release note the change.

Possible reference counting bug in pn_clear_tpwork -- Key: PROTON-829 URL: https://issues.apache.org/jira/browse/PROTON-829 Project: Qpid Proton Issue Type: Bug Components: proton-c Affects Versions: 0.8 Reporter: Alan Conway Assignee: Alan Conway Fix For: 0.9
Re: Proposed SASL changes (API and functional)
On Thu, 2015-02-26 at 12:28 +0000, Robbie Gemmell wrote: ... I'm going to post my comments here and on the wiki, as I don't think many (except maybe you) will actually see them on the wiki ;)

Thank you for the excellent feedback. I'm going to answer on the wiki - as it'll save me from cutting and pasting. [I did try to add the lists as watchers, but that doesn't seem to be possible in any obvious way]

wiki link: https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=51812103&focusedCommentId=51812468#comment-51812468

Andrew
[jira] [Updated] (PROTON-829) Possible reference counting bug in pn_clear_tpwork
[ https://issues.apache.org/jira/browse/PROTON-829?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alan Conway updated PROTON-829: --- Assignee: Rafael H. Schloming (was: Alan Conway)

Possible reference counting bug in pn_clear_tpwork -- Key: PROTON-829 URL: https://issues.apache.org/jira/browse/PROTON-829 Project: Qpid Proton Issue Type: Bug Components: proton-c Affects Versions: 0.8 Reporter: Alan Conway Assignee: Rafael H. Schloming Fix For: 0.9

-- This message was sent by Atlassian JIRA (v6.3.4#6332)
Re: I think that's a blocker...
On Wed, Feb 25, 2015 at 9:23 PM, Cliff Jansen cliffjan...@gmail.com wrote: Two usage cases, very desirable in Proton, happen to be mostly trivial on POSIX but mostly non-trivial on Windows: multi-threading and external loops. Proton io and selector classes are POSIX-y in look and feel, but IO completion ports are very different and can mimic the POSIX functionality only so far. The documented restrictions I wrote (PROTON-668) tries to curb expectations for cross platform functionality (but still allow use cases like Dispatch). This, however, is not a cross platform issue. This particular problem is confined to user space and affects all platforms equally. Presumably the API needs fixing or we agree to take a step backwards. There is a pretty severe limitation without this fix since there is no way for the API to communicate socket errors back to the user. Also, even if we roll back the fix, the wouldblock flag is accessed with exactly the same pattern as the error slot. I would assume this means there is still an issue even with the fix rolled back since presumably two different threads could overwrite the wouldblock and you could get a hang or an error since one/both of them could get the wrong value. Note that Bozo previously pointed out (Proton mailing list) that the pn_io_t API had threading inconsistencies with pn_io_error() and pn_io_wouldblock(). Perhaps a pn_selectable_t should be passed in instead of a socket parameter, or proton should maintain a map of errors and wouldblock state for sockets until they are pn_close'd (as the Windows code already does for completion port descriptor state). The former would be more consistent with proton overall, the latter would require some user space locking. Another possibility could be to pass in pointers to error/wouldblock state as part of the io call. 
Passing in specific pointers to error and/or wouldblock state seems like it is less flexible than passing in a pointer to something more abstract that can contain not only the error state, but also whatever other thread-local state makes sense. My assumption is/was that pn_io_t provides this context.

While I still think this is not a Windows issue, and the documentation is supposed to reflect the Dispatch pattern and not handcuff it, here is more about the pn_io_t implementation:

- Global state is in struct iocp_t, per-socket state is in struct iocpdesc_t (see iocp.h; the OS looks after this stuff in POSIX)
- It has to set up and tear down the Windows socket system
- It has to know if the application is using an external loop or pn_selector_select
- Set up the completion port subsystem (unless external loop)
- It has to find IOCP state for each passed-in socket
- Manage multiple concurrent io operations per socket (N writes + 1 read)
- Notify PN_READABLE, PN_WRITABLE, etc. changes to the selector (if any) for each call
- Do an elaborate tear down on pn_io_free (drain completions, force close dangling sockets)

Regarding the documentation, I looked at Dispatch, which had been using Proton in a multi-threaded manner for some time with considerable success. The old driver.c (now deprecated) allowed:

- simultaneous threads to do pn_connector_process() - but no two threads sharing a connector/transport/socket
- pn_driver_wait_n() combined with pn_connector_next() - only one thread waiting/choosing at a time
- pn_driver_wakeup() - any thread, any time, to unstick things
- everything else (listen, accept, connect) considered non-thread safe

which provides plenty of parallelism if you have more than one connection. The documentation I wrote tried to say that you could do that much on any platform, but no more (without risking undefined behavior).
Things (and the documentation) get further complicated by supporting external loops, which prevents the use of IO completion ports completely for a given pn_io_t and uses a different set of system calls. Perhaps the doc restrictions could be summarized as:

- One pn_io_t/pn_selector_t, one thread - no restrictions
- One pn_io_t/pn_selector_t, multi threads - limited thread safety (Dispatch)
- One pn_io_t, no pn_selector_t, external loop, one thread - no restrictions
- One pn_io_t, no selector, external loop, multi threads - ???
- multiple pn_io_t: doable, but sockets must stick to one pn_io_t

Some difficulties you might not expect: Linux doesn't care if sockets move between selectors, or if one thread is reading on a socket while another is writing to the same one. Simple things like this would have major design and performance implications for Proton on Windows.

It sounds like one way or another we need at least some design changes. I don't think it's workable to have overlapping/close but distinct semantics for the API on different platforms (e.g. you can move sockets on one platform but not on another). I'm starting to think we either need one platform to precisely and fully emulate the semantics of the other platform, or they both need to implement some interface that is slightly higher level and can better accommodate the differences.
Re: Proposed SASL changes (API and functional)
On Wed, 2015-02-25 at 13:45 -0500, Andrew Stitcher wrote: On Tue, 2015-02-24 at 15:48 -0500, Andrew Stitcher wrote: ... If you are at all interested please go and look at the proposal and comment on it there. Thank you very much to Alan and Jakub for commenting on my proposal. The reason I asked people to comment over on the wiki is that it is very hard to find a discussion like this related to a specific proposal after some time has elapsed if it is in email, whereas actually attached to the proposal on the wiki keeps all the relevant comments together. If it is ok with them I will copy the comments over there: Alan, Jakub? Copy away
Re: I think that's a blocker...
On Thu, 2015-02-26 at 15:09 -0500, Rafael Schloming wrote: ... It sounds like one way or another we need at least some design changes. I don't think it's workable to have overlapping/close but distinct semantics for the API on different platforms (e.g. you can move sockets on one platform but not on another). I'm starting to think we either need one platform to precisely and fully emulate the semantics of the other platform, or they both need to implement some interface that is slightly higher level and can better accommodate the differences. I may have misremembered, but I think the essential platform difference here is that Windows IOCP really tries to implement a Proactor type of pattern rather than the reactor type of pattern that we are using in Proton. [Proactor is where fundamentally the system calls back a processing function on a thread that you give it] In the qpid implementation I had to effectively make the exposed interface a proactor type interface in all platforms, to bridge the gap. I'm not sure this is workable in a context where the user can supply their own event loop. Andrew
Re: PROTON-827: Reactive client binding for the go programming language
On Thu, 2015-02-26 at 13:43 -0500, Alan Conway wrote: On Thu, 2015-02-26 at 09:39 +0000, Dominic Evans wrote: Hi Alan, -Alan Conway acon...@redhat.com wrote: - I plan to start working on a go golang.org binding for proton. I envisage a SWIG binding similar to the other swig-based bindings (python, ruby, etc.) and an API layer similar to the new reactive Python API (based on the C reactor.) This will be an exploratory effort to begin with; I'd like to hear from anybody who might be interested in using such a thing or helping to implement it. This is certainly something I'd be interested in. However, as far as I was aware, the usefulness of SWIG for Go was where you needed to wrapper C++ libraries. If you're just planning on wrapping the proton-c reactor code, wouldn't we simply use cgo [1]?

Maybe. The go docs mention both swig and cgo. My initial assumption was that since we already have a well defined swig layer that is used by everything else, that probably would make sense. However I haven't looked at cgo in detail yet, so if it has big advantages over swig then it is a possibility. In principle, if possible, I would avoid using swig!

(Aside: I think it would actually be nice to use cffi for the python bindings - the fewer extra build dependencies for the minimal testable build the better. Since our testing uses python, I would be very much in favour of recasting the proton.py code in terms of cffi (or similar) and avoiding the swigged cproton.so library.)

Andrew
[jira] [Commented] (PROTON-830) [IGNORE ME] GitHub integration test
[ https://issues.apache.org/jira/browse/PROTON-830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14338197#comment-14338197 ] ASF GitHub Bot commented on PROTON-830: --- GitHub user gemmellr opened a pull request: https://github.com/apache/qpid-proton/pull/9 PROTON-830: trivial README change, testing GitHub integration As per the subject. You can merge this pull request into a Git repository by running: $ git pull https://github.com/gemmellr/qpid-proton asf-github-integration Alternatively you can review and apply these changes as the patch at: https://github.com/apache/qpid-proton/pull/9.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #9 commit 78ce718dc6a7618baa502b7baef38a52b43bfacd Author: Robert Gemmell rob...@apache.org Date: 2015-02-26T10:22:49Z PROTON-830: trivial README change, testing GitHub integration [IGNORE ME] GitHub integration test --- Key: PROTON-830 URL: https://issues.apache.org/jira/browse/PROTON-830 Project: Qpid Proton Issue Type: Bug Reporter: Robbie Gemmell Assignee: Robbie Gemmell JIRA to perform some GitHub integration tests -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[GitHub] qpid-proton pull request: PROTON-830: trivial README change, testi...
GitHub user gemmellr opened a pull request: https://github.com/apache/qpid-proton/pull/9 PROTON-830: trivial README change, testing GitHub integration As per the subject. You can merge this pull request into a Git repository by running: $ git pull https://github.com/gemmellr/qpid-proton asf-github-integration Alternatively you can review and apply these changes as the patch at: https://github.com/apache/qpid-proton/pull/9.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #9 commit 78ce718dc6a7618baa502b7baef38a52b43bfacd Author: Robert Gemmell rob...@apache.org Date: 2015-02-26T10:22:49Z PROTON-830: trivial README change, testing GitHub integration --- If your project is set up for it, you can reply to this email and have your reply appear on GitHub as well. If your project does not have this feature enabled and wishes so, or if the feature is enabled but not working, please contact infrastructure at infrastruct...@apache.org or file a JIRA ticket with INFRA. ---
[jira] [Commented] (PROTON-830) [IGNORE ME] GitHub integration test
[ https://issues.apache.org/jira/browse/PROTON-830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14338206#comment-14338206 ] ASF GitHub Bot commented on PROTON-830: --- Github user gemmellr commented on the pull request: https://github.com/apache/qpid-proton/pull/9#issuecomment-76155471 Testing reply via GitHub

[IGNORE ME] GitHub integration test --- Key: PROTON-830 URL: https://issues.apache.org/jira/browse/PROTON-830 Project: Qpid Proton Issue Type: Bug Reporter: Robbie Gemmell Assignee: Robbie Gemmell JIRA to perform some GitHub integration tests -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[GitHub] qpid-proton pull request: PROTON-830: trivial README change, testi...
Github user gemmellr commented on the pull request: https://github.com/apache/qpid-proton/pull/9#issuecomment-76155471 Testing reply via GitHub
Re: PROTON-827: Reactive client binding for the go programming language
Hi Alan,

Alan Conway acon...@redhat.com wrote:
> I plan to start working on a go golang.org binding for proton. I envisage a SWIG binding similar to the other swig-based bindings (python, ruby, etc.) and an API layer similar to the new reactive Python API (based on the C reactor.) This will be an exploratory effort to begin with, I'd like to hear from anybody who might be interested in using such a thing or helping to implement it.

This is certainly something I'd be interested in. However, as far as I was aware, SWIG is mainly useful for Go where you need to wrap C++ libraries. If you're just planning on wrapping the proton-c reactor code, wouldn't we simply use cgo [1]?

Cheers, Dom

[1] http://golang.org/cmd/cgo/

Unless stated otherwise above: IBM United Kingdom Limited - Registered in England and Wales with number 741598. Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU
Re: PROTON-827: Reactive client binding for the go programming language
http://golang.org/cmd/cgo/ :/
[jira] [Created] (PROTON-830) [IGNORE ME] GitHub integration test
Robbie Gemmell created PROTON-830:

    Summary: [IGNORE ME] GitHub integration test
        Key: PROTON-830
        URL: https://issues.apache.org/jira/browse/PROTON-830
    Project: Qpid Proton
 Issue Type: Bug
   Reporter: Robbie Gemmell
   Assignee: Robbie Gemmell

JIRA to perform some GitHub integration tests
Re: [GitHub] qpid-proton pull request: PROTON-830: trivial README change, testi...
Testing responses via email.

On 26 February 2015 at 10:25, gemmellr g...@git.apache.org wrote:
> GitHub user gemmellr opened a pull request:
>
>     https://github.com/apache/qpid-proton/pull/9
>
>     PROTON-830: trivial README change, testing GitHub integration
> ...
Re: Proposed SASL changes (API and functional)
On 25 February 2015 at 18:40, Andrew Stitcher astitc...@redhat.com wrote:
> On Wed, 2015-02-25 at 10:27 +0100, Jakub Scholz wrote:
> > ... But I find this part a bit dangerous:
> > > Classically in protocols where SASL was not optional the way to avoid double authentication was to use the EXTERNAL SASL mechanism. With AMQP, SASL is optional, so if SSL is used for client authentication the SASL layer could be entirely omitted and so using EXTERNAL is not necessary.
>
> This is really just a statement about how AMQP 1.0 works - if you like, it is an aside praising the good protocol design sense of the standard's authors (you know who you are!).
>
> > I understand the idea and I would even agree that this is the proper way to do it in the long term. But I'm not sure whether all brokers support this concept. For example, I'm not sure whether you can configure the Qpid C++ broker in a way to accept AMQP 1.0 connections with SSL Client Authentication without SASL EXTERNAL while at the same time accepting AMQP 0-10 connections only with SASL EXTERNAL. Therefore I would be afraid that allowing SSL Client Authentication only without SASL could cause some serious incompatibilities - I think both should be possible / supported.
>
> And both are supported. The qpidd 0-10 support is not going to change. The qpidd 1.0 support is on a different code path so there is little bleed over in functionality. The proton server code can auto-detect which protocol layers the client is using and, subject to it being an allowed protocol configuration, authenticate it. Other AMQP 1.0 implementations may not support leaving out the SASL layer, so you can certainly always tell the client to use it (even if it adds no useful functionality, as in the ANONYMOUS and EXTERNAL cases). So as far as the current plans for proton go, if you require SSL client authentication it will happen whether or not a SASL layer is there.
>
> As EXTERNAL and better SSL integration with the transport code is not yet implemented there may be something significant I've missed in this analysis, in which case it's all subject to change! I hope that helps.

I'm going to post my comments here and on the wiki, as I don't think many (except maybe you) will actually see them on the wiki ;)

Assuming I've read the email thread correctly, you do plan on implementing EXTERNAL so that clients can be authenticated using SSL client auth + EXTERNAL. I think that is a good idea, as even though it can be omitted, not all brokers or clients will want to support doing so. There may also be cases where implementations handle the SSL by themselves and only want to do SASL via proton (albeit by reading/writing bytes currently). Possibly less so on the C side, but almost everyone seems to do that with Proton-J.

Mentioning Proton-J, are there any plans there? The completion of the SASL stage only being indicated by an event, combined with removal of the old read/write sasl bytes methods, seems to end any prospect of not using the events if you want SASL (at least, not without moving to intercepting the raw SASL frames before the transport). Does this mark the end of the old non-events API?

The method for excluding certain server mechanisms seems a bit odd to me. A method to configure the mechanisms to be used feels like it might be a better fit (either ignoring mechanisms that aren't installed, throwing to indicate they aren't present, or making that a choice). What do you think?

How does pn_transport_require_auth interact with the old 'allow skip' functionality that let transports with SASL configured permit non-SASL connections? The two have different defaults, as skipping wasn't allowed previously (hence the old method being added), whereas apparently it now would be by default. That itself feels a little wrong to me; I think people should have to opt in to that behaviour as they did previously.
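For readers following the API discussion above, the interaction in question might look roughly like this in a server's transport setup. This is only a sketch against the names discussed in this thread (pn_transport_require_auth was proposed above; pn_sasl_allowed_mechs is assumed here as the "configure the mechanisms to be used" alternative), not a compilable example, since it depends on the proton-c headers and on the final shape of the proposed changes:

    /* Sketch only: assumes the proposed SASL API under discussion. */
    #include <proton/transport.h>
    #include <proton/sasl.h>

    void configure_server_transport(pn_transport_t *transport)
    {
        /* Opt in to requiring an authenticated connection. Under the
         * proposal, a client could otherwise skip the SASL layer
         * entirely (e.g. when SSL client auth is in use), which is the
         * default-behaviour change questioned above. */
        pn_transport_require_auth(transport, true);

        /* Configure which mechanisms the server offers, rather than
         * excluding individual ones. */
        pn_sasl_t *sasl = pn_sasl(transport);
        pn_sasl_allowed_mechs(sasl, "EXTERNAL PLAIN");
    }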
You mention removing deprecated APIs in the form of pn_sasl_client/pn_sasl_server. I think it's worth mentioning these were only to become deprecated in 0.9.

Robbie