RE: Accessing queues with '/' in name in Rest API [qpid java broker 6.0.4]
Hello Rob,

Olivier and I re-checked the global address domain feature and it seems it does not resolve the global addresses correctly. When I create the queue 'queueA' on the broker, set the globalAddressDomains to '/domain/subdomain', and then register a JMS listener for the queue '/domain/subdomain/queueA', I get an 'amqp-not-found'. Is this expected?

When I told you it worked, I think I had a zombie queue '/domain/subdomain/queueA' from my previous attempt to use '/' in queue names that made it "work" :-(

Thank you,
Regards,
Antoine

-----Original Message-----
From: Rob Godfrey [mailto:rob.j.godf...@gmail.com]
Sent: jeudi 2 mars 2017 16:07
To: users@qpid.apache.org
Subject: Re: Accessing queues with '/' in name in Rest API [qpid java broker 6.0.4]

On 2 March 2017 at 15:11, Antoine Chevin <antoine.che...@gmail.com> wrote:

> Thank you Rob for the very detailed answer.
> I saw in the code (org.apache.qpid.server.protocol.v1_0.Session_1_0#remoteLinkCreation) that the exchange lookup is skipped if the address starts with a '/'.
> I intend to use a '/' in the beginning because I don't want the exchange lookup.
> Do you think it is a good approach?

So the intent here is that addresses that start with "/" are considered to be "global" addresses as previously described: addresses that start with "/" and match one of the globalAddressDomains for the virtual host would route within the virtual host to the appropriate destination, while names that begin with "/" but don't match one of the domains for the vhost would be sent via federation to a remote broker (when that code gets completed - obviously we don't have federation of that kind in the Java Broker currently). So having a name which begins with "/" may work right now, but it's reasonably likely it might break in the future.

In general I would avoid "/" as well as "?", ";", ",", "[", "]", "|", "(", and ")" in queue names.

Is the plan that all your queues will start with the same //...
prefix, or will different queues have different prefixes?

-- Rob

> Thank you,
> Regards,
> Antoine
>
> -----Original Message-----
> From: Rob Godfrey [mailto:rob.j.godf...@gmail.com]
> Sent: jeudi 2 mars 2017 11:09
> To: users@qpid.apache.org
> Subject: Re: Accessing queues with '/' in name in Rest API [qpid java broker 6.0.4]
>
> On 2 March 2017 at 10:46, Antoine Chevin <antoine.che...@gmail.com> wrote:
>
> > Thank you Rob for the answer. Yes it really helps!
> > I noticed that addresses in the form <exchange>/<routing-key> are also used with AMQP 1.0. Is it expected?
>
> It is part of how the Java Broker maps the AMQP 0-x Exchange/Binding/Queue model into the AMQP 1.0 address space, yes.
>
> In short, when the Java Broker receives a message to an address X it first looks to see if there is an exchange X, then if there is a queue X; then, if X contains a /, it looks to see if the part before the / is an exchange name, and if so it sends to that exchange with the part after the / being used as the routing key.
>
> When the Java Broker receives a request to consume from an address X it first looks to see if there is a queue X, then if there is an exchange X (in which case it creates a temporary queue and binds with an empty binding key), and then, if X contains a / and the part before the / is an exchange name, it will create a temporary queue and bind that to the exchange with the binding key being the part of X after the /.
>
> Note the asymmetry between send and consume: on send it first looks for an exchange and on consume it first looks for a queue.
>
> (There are a few more rules for the globalAddressDomains and for system addresses like $management, but the above is the general rule).
>
> -- Rob
>
> > Thank you,
> > Regards,
> > Antoine
> >
> > On 1 March 2017 at 20:25, Olivier Mallassi <olivier.malla...@gmail.com> wrote:
> >
> > > Rob, all
> > >
> > > Thank you Rob for this. Could you please share more details regarding not using the "/"?
> > So there are a couple of reasons why I think not using a / makes sense:
> >
> > 1) Because of exactly the REST / encoding issue that you ran into - using characters that often need escaping can cause a lot of issues in config files, parameters etc... depending upon where the queue name might be used you may end up encoding that / one, two or even more times... this gets messy fast.
> >
> > 2) Because in AMQP addressing we've been imagining the / as a
RE: Configuring addresses starting with '/' on qpid-dispatch router 0.7.0
Hello,

Do you have an idea about the behavior below?

Thank you,
Regards,
Antoine

-----Original Message-----
From: Antoine Chevin [mailto:antoine.che...@gmail.com]
Sent: jeudi 2 mars 2017 10:43
To: users@qpid.apache.org
Subject: Configuring addresses starting with '/' on qpid-dispatch router 0.7.0

Hello,

I tried to configure addresses starting with a '/' but using qdstat I see that this '/' is removed. Is it expected? I noticed the same behavior with autolinks.

Thank you,
Regards,
Antoine
RE: Accessing queues with '/' in name in Rest API [qpid java broker 6.0.4]
Hello Rob,

We gave a try to the globalAddressDomains and it works fine. I think we will use regular queue names and globalAddressDomains for our use case. Thank you!

Do you know if the broker federation is planned for v7?

Regards,
Antoine

-----Original Message-----
From: Rob Godfrey [mailto:rob.j.godf...@gmail.com]
Sent: jeudi 2 mars 2017 16:07
To: users@qpid.apache.org
Subject: Re: Accessing queues with '/' in name in Rest API [qpid java broker 6.0.4]

On 2 March 2017 at 15:11, Antoine Chevin <antoine.che...@gmail.com> wrote:

> Thank you Rob for the very detailed answer.
> I saw in the code (org.apache.qpid.server.protocol.v1_0.Session_1_0#remoteLinkCreation) that the exchange lookup is skipped if the address starts with a '/'.
> I intend to use a '/' in the beginning because I don't want the exchange lookup.
> Do you think it is a good approach?

So the intent here is that addresses that start with "/" are considered to be "global" addresses as previously described: addresses that start with "/" and match one of the globalAddressDomains for the virtual host would route within the virtual host to the appropriate destination, while names that begin with "/" but don't match one of the domains for the vhost would be sent via federation to a remote broker (when that code gets completed - obviously we don't have federation of that kind in the Java Broker currently). So having a name which begins with "/" may work right now, but it's reasonably likely it might break in the future.

In general I would avoid "/" as well as "?", ";", ",", "[", "]", "|", "(", and ")" in queue names.

Is the plan that all your queues will start with the same //... prefix, or will different queues have different prefixes?
-- Rob

> Thank you,
> Regards,
> Antoine
>
> -----Original Message-----
> From: Rob Godfrey [mailto:rob.j.godf...@gmail.com]
> Sent: jeudi 2 mars 2017 11:09
> To: users@qpid.apache.org
> Subject: Re: Accessing queues with '/' in name in Rest API [qpid java broker 6.0.4]
>
> On 2 March 2017 at 10:46, Antoine Chevin <antoine.che...@gmail.com> wrote:
>
> > Thank you Rob for the answer. Yes it really helps!
> > I noticed that addresses in the form <exchange>/<routing-key> are also used with AMQP 1.0. Is it expected?
>
> It is part of how the Java Broker maps the AMQP 0-x Exchange/Binding/Queue model into the AMQP 1.0 address space, yes.
>
> In short, when the Java Broker receives a message to an address X it first looks to see if there is an exchange X, then if there is a queue X; then, if X contains a /, it looks to see if the part before the / is an exchange name, and if so it sends to that exchange with the part after the / being used as the routing key.
>
> When the Java Broker receives a request to consume from an address X it first looks to see if there is a queue X, then if there is an exchange X (in which case it creates a temporary queue and binds with an empty binding key), and then, if X contains a / and the part before the / is an exchange name, it will create a temporary queue and bind that to the exchange with the binding key being the part of X after the /.
>
> Note the asymmetry between send and consume: on send it first looks for an exchange and on consume it first looks for a queue.
>
> (There are a few more rules for the globalAddressDomains and for system addresses like $management, but the above is the general rule).
>
> -- Rob
>
> > Thank you,
> > Regards,
> > Antoine
> >
> > On 1 March 2017 at 20:25, Olivier Mallassi <olivier.malla...@gmail.com> wrote:
> >
> > > Rob, all
> > >
> > > Thank you Rob for this. Could you please share more details regarding not using the "/"?
> > So there are a couple of reasons why I think not using a / makes sense:
> >
> > 1) Because of exactly the REST / encoding issue that you ran into - using characters that often need escaping can cause a lot of issues in config files, parameters etc... depending upon where the queue name might be used you may end up encoding that / one, two or even more times... this gets messy fast.
> >
> > 2) Because in AMQP addressing we've been imagining the / as a separator when using some sort of topological address scheme for addressing in federated networks... for instance you might have a queue for orders in your dongle department of your widget division of your company foo.com... and you might expose that address as //foo.com/widget/dongle/orders
Re: Accessing queues with '/' in name in Rest API [qpid java broker 6.0.4]
Thank you Rob for the very detailed answer.

I saw in the code (org.apache.qpid.server.protocol.v1_0.Session_1_0#remoteLinkCreation) that the exchange lookup is skipped if the address starts with a '/'. I intend to use a '/' in the beginning because I don't want the exchange lookup. Do you think it is a good approach?

Thank you,
Regards,
Antoine

-----Original Message-----
From: Rob Godfrey [mailto:rob.j.godf...@gmail.com]
Sent: jeudi 2 mars 2017 11:09
To: users@qpid.apache.org
Subject: Re: Accessing queues with '/' in name in Rest API [qpid java broker 6.0.4]

On 2 March 2017 at 10:46, Antoine Chevin <antoine.che...@gmail.com> wrote:

> Thank you Rob for the answer. Yes it really helps!
> I noticed that addresses in the form <exchange>/<routing-key> are also used with AMQP 1.0. Is it expected?

It is part of how the Java Broker maps the AMQP 0-x Exchange/Binding/Queue model into the AMQP 1.0 address space, yes.

In short, when the Java Broker receives a message to an address X it first looks to see if there is an exchange X, then if there is a queue X; then, if X contains a /, it looks to see if the part before the / is an exchange name, and if so it sends to that exchange with the part after the / being used as the routing key.

When the Java Broker receives a request to consume from an address X it first looks to see if there is a queue X, then if there is an exchange X (in which case it creates a temporary queue and binds with an empty binding key), and then, if X contains a / and the part before the / is an exchange name, it will create a temporary queue and bind that to the exchange with the binding key being the part of X after the /.

Note the asymmetry between send and consume: on send it first looks for an exchange and on consume it first looks for a queue.

(There are a few more rules for the globalAddressDomains and for system addresses like $management, but the above is the general rule).
-- Rob

> Thank you,
> Regards,
> Antoine
>
> On 1 March 2017 at 20:25, Olivier Mallassi <olivier.malla...@gmail.com> wrote:
>
> > Rob, all
> >
> > Thank you Rob for this. Could you please share more details regarding not using the "/"?
>
> So there are a couple of reasons why I think not using a / makes sense:
>
> 1) Because of exactly the REST / encoding issue that you ran into - using characters that often need escaping can cause a lot of issues in config files, parameters etc... depending upon where the queue name might be used you may end up encoding that / one, two or even more times... this gets messy fast.
>
> 2) Because in AMQP addressing we've been imagining the / as a separator when using some sort of topological address scheme for addressing in federated networks... for instance you might have a queue for orders in your dongle department of your widget division of your company foo.com... and you might expose that address as //foo.com/widget/dongle/orders whereas someone connected directly to the broker would just see the queue as "orders" (though they could also address it by its full "global" name). The Java Broker already makes some allowance for this with the notion of "globalAddressDomains" which you can set on the virtual host. For any domain in the list of defined globalAddressDomains, the virtualhost will accept messages sent to <domain>/M as if they were sent to M (and the same with consuming).
>
> Also note that for the Java Broker an address of the form <exchange name>/<routing key> can be used to send / receive via AMQP 0-x exchange/routing-key semantics.
>
> Hope this helps,
> Rob
>
> > On our side we are using amqp 1.0 that, AFAIU, promotes the "complex" addressing plans.
> > The benefit for us would be
> > - alignment between our http and amqp naming conventions. It is a nice to have but can help readability
> > - use "URL" to route messages.
> > Like the samples with the linkroutepattern.
> >
> > Not sure these are good ideas btw. Any feedback is welcomed.
> >
> > Regards
> >
> > On Wed, 1 Mar 2017 at 18:16, Rob Godfrey <rob.j.godf...@gmail.com> wrote:
> >
> > > In general I'd advise against using the '/' character in queue names if possible... however if you must, then you need to double-encode the name, so "a/b" would become "a%252Fb"
> > >
> > > Hope this helps,
> > > Rob
> > >
> > > On 1 March 2017 at 17:31, Antoine Chevin <antoine.che...@gmail.com> wrote:
> > >
> > > > Hello,
> > > >
> > > > I created a queue with a '/' in the name. How can I access it in the rest api?
> > > > I tried to encode the '/' with %2F but I still get a 422 "too many entries in path for REST servlet queue."
> > > > Can you please help?
> > > >
> > > > Regards,
> > > > Antoine
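[Editor's note: the send/consume lookup order Rob describes in this thread can be sketched in plain Java. The method names (`resolveSendAddress`, `resolveConsumeAddress`) and the exchange/queue sets are hypothetical names for illustration only, not Broker-J API.]

```java
import java.util.Set;

// Hypothetical sketch of the Java Broker's AMQP 1.0 address resolution
// order as described in the thread; none of these names are real
// Broker-J classes or methods.
public class AddressResolution {

    // On send: exchange first, then queue, then "<exchange>/<routing-key>".
    static String resolveSendAddress(String addr, Set<String> exchanges, Set<String> queues) {
        if (exchanges.contains(addr)) return "exchange:" + addr;
        if (queues.contains(addr)) return "queue:" + addr;
        int slash = addr.indexOf('/');
        if (slash > 0) {
            String ex = addr.substring(0, slash);
            String key = addr.substring(slash + 1);
            if (exchanges.contains(ex)) return "exchange:" + ex + " routing-key:" + key;
        }
        return "not-found";
    }

    // On consume the order is reversed: queue first, then exchange
    // (temporary queue bound with an empty key), then "<exchange>/<binding-key>".
    static String resolveConsumeAddress(String addr, Set<String> exchanges, Set<String> queues) {
        if (queues.contains(addr)) return "queue:" + addr;
        if (exchanges.contains(addr)) return "temp-queue bound to " + addr + " with empty key";
        int slash = addr.indexOf('/');
        if (slash > 0) {
            String ex = addr.substring(0, slash);
            String key = addr.substring(slash + 1);
            if (exchanges.contains(ex)) return "temp-queue bound to " + ex + " with key " + key;
        }
        return "not-found";
    }
}
```

The asymmetry Rob notes shows up directly: for a name that is both an exchange and a queue, `resolveSendAddress` picks the exchange while `resolveConsumeAddress` picks the queue.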
Re: Accessing queues with '/' in name in Rest API [qpid java broker 6.0.4]
Thank you Rob for the answer. Yes it really helps!

I noticed that addresses in the form <exchange>/<routing-key> are also used with AMQP 1.0. Is it expected?

Thank you,
Regards,
Antoine

On 1 March 2017 at 20:25, Olivier Mallassi <olivier.malla...@gmail.com> wrote:

> Rob, all
>
> Thank you Rob for this. Could you please share more details regarding not using the "/"?

So there are a couple of reasons why I think not using a / makes sense:

1) Because of exactly the REST / encoding issue that you ran into - using characters that often need escaping can cause a lot of issues in config files, parameters etc... depending upon where the queue name might be used you may end up encoding that / one, two or even more times... this gets messy fast.

2) Because in AMQP addressing we've been imagining the / as a separator when using some sort of topological address scheme for addressing in federated networks... for instance you might have a queue for orders in your dongle department of your widget division of your company foo.com... and you might expose that address as //foo.com/widget/dongle/orders whereas someone connected directly to the broker would just see the queue as "orders" (though they could also address it by its full "global" name). The Java Broker already makes some allowance for this with the notion of "globalAddressDomains" which you can set on the virtual host. For any domain in the list of defined globalAddressDomains, the virtualhost will accept messages sent to <domain>/M as if they were sent to M (and the same with consuming).

Also note that for the Java Broker an address of the form <exchange name>/<routing key> can be used to send / receive via AMQP 0-x exchange/routing-key semantics.

Hope this helps,
Rob

> On our side we are using amqp 1.0 that, AFAIU, promotes the "complex" addressing plans.
> The benefit for us would be
> - alignment between our http and amqp naming conventions. It is a nice to have but can help readability
> - use "URL" to route messages.
> Like the samples with the linkroutepattern.
>
> Not sure these are good ideas btw. Any feedback is welcomed.
>
> Regards
>
> On Wed, 1 Mar 2017 at 18:16, Rob Godfrey <rob.j.godf...@gmail.com> wrote:
>
> > In general I'd advise against using the '/' character in queue names if possible... however if you must, then you need to double-encode the name, so "a/b" would become "a%252Fb"
> >
> > Hope this helps,
> > Rob
> >
> > On 1 March 2017 at 17:31, Antoine Chevin <antoine.che...@gmail.com> wrote:
> >
> > > Hello,
> > >
> > > I created a queue with a '/' in the name. How can I access it in the rest api?
> > > I tried to encode the '/' with %2F but I still get a 422 "too many entries in path for REST servlet queue."
> > > Can you please help?
> > >
> > > Regards,
> > > Antoine
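[Editor's note: Rob's double-encoding suggestion can be reproduced with the JDK standard library; `doubleEncode` below is just an illustrative helper name.]

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

// Double-encodes a queue name containing '/' for use as a single REST
// path segment: "a/b" -> "a%2Fb" (first pass) -> "a%252Fb" (second pass).
public class DoubleEncode {
    public static String doubleEncode(String queueName) {
        String once = URLEncoder.encode(queueName, StandardCharsets.UTF_8);
        return URLEncoder.encode(once, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        System.out.println(doubleEncode("a/b")); // prints a%252Fb
    }
}
```

The second pass only re-encodes the '%' introduced by the first pass, which is why the servlet on the receiving end sees one path segment and can decode it back to "a/b".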
Configuring addresses starting with '/' on qpid-dispatch router 0.7.0
Hello,

I tried to configure addresses starting with a '/' but using qdstat I see that this '/' is removed. Is it expected? I noticed the same behavior with autolinks.

Thank you,
Regards,
Antoine
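[Editor's note: for reference, an address entry of the kind being configured looks roughly like the following in qdrouterd.conf (the prefix value and multicast distribution are illustrative; per the observation above, the router apparently strips the leading '/' from the configured prefix).]

```
address {
    prefix: /domain/subdomain
    distribution: multicast
}
```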
Accessing queues with '/' in name in Rest API [qpid java broker 6.0.4]
Hello,

I created a queue with a '/' in the name. How can I access it in the rest api?
I tried to encode the '/' with %2F but I still get a 422 "too many entries in path for REST servlet queue."
Can you please help?

Regards,
Antoine
Delivery settlement using Qpid proton 0.16.0 C++ bindings
Hello,

We are experimenting with delivery settlement using the Qpid Proton 0.16.0 C++ bindings. Is there a way to find out if a delivery has been remotely settled?

In our case, we want to implement client acknowledgment of a message like it is done in JMS. We want to be sure that, when our 'acknowledge' method returns, the delivery is actually settled. To perform the client acknowledgment, we store a copy of the delivery object and call 'accept()' on it when the client acknowledges.

We tried to use delivery.settled() to verify whether the delivery is remotely settled, but it always returns false. Do you know why?

Thank you in advance,
Regards,
Antoine
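[Editor's note: to make the terminology concrete, the snippet below is a toy model in plain Java (deliberately NOT the Proton C++ API) of the settlement handshake being asked about: accepting a delivery is only a local disposition, and a receiver-side `settled()`-style flag can only become true once the peer's disposition frame with the settled flag has been processed.]

```java
// Toy model of AMQP delivery settlement, for illustration only.
// It shows why calling accept() alone does not make settled() true.
public class DeliveryModel {
    private boolean locallyAccepted;   // we sent an "accepted" disposition
    private boolean remotelySettled;   // the peer sent a disposition with settled=true

    public void accept() {
        // Sends our local disposition; the peer has not settled yet.
        locallyAccepted = true;
    }

    public void onRemoteDisposition(boolean settled) {
        // Invoked when the peer's disposition frame is processed.
        if (settled) {
            remotelySettled = true;
        }
    }

    public boolean locallyAccepted() {
        return locallyAccepted;
    }

    public boolean settled() {
        // Mirrors a receiver-side settled() check: false until the
        // remote settlement actually arrives.
        return remotelySettled;
    }
}
```

If the observed `settled()` is checked immediately after `accept()` returns, this model suggests it would indeed still be false, since the remote settlement round-trip has not yet completed.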
Http management port already in use for Qpid Java Broker 6.0.4
Hello,

I noticed that when starting the Qpid Java Broker with the management port already in use, it logs that it could not start the service and then that the broker is ready.

I've read https://issues.apache.org/jira/browse/QPID-6096 and I'm wondering why the http management port isn't treated as a special case. How can it be restarted without a broker restart?

Thanks in advance,
Regards,
Antoine
RE: Implementing consumer broadcast with a single queue with Qpid Java Broker 6.0.4
Hello Rob,

You're right, there was something strange in my question. I realized that it was not clear for me either. I explored different strategies for the consumer broadcast. One was indeed to have the consumer receive all the messages from the moment it connects, and not keep the messages on the broker once all the consumers have received them.

Using a queue configured with "ensureNondestructiveConsumers", a replay period of 0 and a maxTtl of 10 seconds worked for me.

Thank you for the answer,
Regards,
Antoine

-----Original Message-----
From: Rob Godfrey [mailto:rob.j.godf...@gmail.com]
Sent: lundi 9 janvier 2017 15:40
To: users@qpid.apache.org
Subject: Re: Implementing consumer broadcast with a single queue with Qpid Java Broker 6.0.4

Hi Antoine,

I'm not sure I'm totally clear on what you mean by "I cannot lose messages" in this context. Are you saying that if a consumer is connected, then it should receive all messages which arrive on the "topic" from the point at which it connects; but once all consumers have seen a message (or if there are no consumers connected) it is OK for the message to be deleted?

-- Rob

On 9 January 2017 at 14:34, Antoine Chevin <antoine.che...@gmail.com> wrote:

> Hello,
>
> I analyzed whether it is possible to broadcast messages using a single queue. I know I could make the consumers listen to a topic, but I noticed that a temporary queue is created on the broker and I'm afraid of the broker performance if hundreds of consumers listen to the topic.
>
> So far, I managed to broadcast in 2 ways:
> - configure a queue on the broker with "ensureNondestructiveConsumers" and a default filter with a replay period, so as not to receive all the messages when a consumer connects. The problem is that the messages on the queue are never cleared. A TTL can solve that but I cannot lose messages.
> - configure the dispatch router 0.6.1 to listen to the broker and expose an address with the distribution "multicast". This works very well.
> To cover use cases where only the Qpid java broker can be used, I wonder if you know other solutions to broadcast a message to consumers using a single queue?
>
> Thanks in advance,
> Best regards,
> Antoine
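[Editor's note: a sketch of the broadcast queue attributes discussed in this thread, as REST/JSON. "ensureNondestructiveConsumers" is named in the thread; "maximumMessageTtl" (in milliseconds) and the "defaultFilters" shape are my guesses at the Broker-J attribute syntax and should be checked against the broker documentation.]

```json
{
  "name": "broadcast.queue",
  "ensureNondestructiveConsumers": true,
  "maximumMessageTtl": 10000,
  "defaultFilters": { "replay": { "x-qpid-replay-period": [ "0" ] } }
}
```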
RE: Qpid java broker 6.0.4 - JDBC message store performance issues
Thank you for the reply. Yes, it really helped.

I spent more time investigating the metrics and found out that in my benchmark the throughput is lower at the same time as the latency is higher. The higher latency can be explained by the network indirection. Each producer sends its messages sequentially, so the increased latency directly affects the throughput. I reran the benchmark with 8 producers (instead of 4) and obtained throughput results closer to the ones I used to have (BDB: 5.4K msg/s, JDBC: 4.9K msg/s).

One question remains: I tried to configure the connection pool (BoneCP) but did not manage to change the partitionCount, maxConnectionsPerPartition or minConnectionsPerPartition. I could not find any documentation for that. Do you know how I can set those values?

Thank you,
Regards,
Antoine

-----Original Message-----
From: Rob Godfrey [mailto:rob.j.godf...@gmail.com]
Sent: lundi 9 janvier 2017 15:49
To: users@qpid.apache.org
Subject: Re: Qpid java broker 6.0.4 - JDBC message store performance issues

Just to cover this part:

*I don't know the code in details but is there a reason for not using a single connection for the JDBC message store lifecycle?*

Since SQL/JDBC can have at most one open transaction for any given connection, we would have to have (for AMQP 0-x) one JDBC connection open per session (i.e. potentially multiple per AMQP connection). This would likely lead to vastly more connections to the database being opened than would be necessary. For AMQP 1.0 the situation is worse since the protocol allows multiple open transactions. In practice it makes more sense for us to use a pool of connections and to pull a connection out of the pool when we want to begin transactional work.

(Also note that even if you are not using transactions in AMQP, we need to use them at the store level - if a message is published to an exchange and is routed to multiple queues, this must happen atomically.
Similarly, if you acknowledge multiple messages in a single command, this must happen in a database txn.)

Hope this helps,
Rob

On 9 January 2017 at 14:13, Antoine Chevin <antoine.che...@gmail.com> wrote:

> Hello,
>
> Thank you for your answers. It's a lot clearer for me now how the JDBC store behaves. You can find the answers to your questions below (originally in blue).
>
> Regards,
> Antoine
>
> -----Original Message-----
> From: Rob Godfrey [mailto:rob.j.godf...@gmail.com]
> Sent: vendredi 6 janvier 2017 15:52
> To: users@qpid.apache.org
> Subject: Re: Qpid java broker 6.0.4 - JDBC message store performance issues
>
> On 6 January 2017 at 14:06, Lorenz Quack <quack.lor...@gmail.com> wrote:
>
> > Hello Antoine,
> >
> > Yes, it is expected that a new Connection is made for each enqueue and dequeue.
> > The relevant code is org.apache.qpid.server.store.AbstractJDBCMessageStore#getConnection which is called from multiple places.
>
> To be clear - the semantic behaviour is that each "transaction" gets a new connection ... if you use transactions in the client to combine multiple enqueues/dequeues then this will all happen on one connection.
>
> *I don't know the code in details but is there a reason for not using a single connection for the JDBC message store lifecycle?*
>
> Obviously this strategy doesn't work well if you are actually opening a new TCP connection each time (for in-memory Derby, which the code was originally written for, it doesn't matter too much if Qpid isn't pooling in any way, since the connections don't have much overhead), so using a connection caching provider is pretty much mandatory if you want to use an external RDBMS.
>
> -- Rob
>
> > We do our performance testing using the BDB store. We do not performance test other store types.
> > Therefore, it is possible that the JDBC path is not as well tuned as the BDB one.
> > I do not know of any obvious performance bottlenecks in the JDBC code (if they were obvious we would have probably fixed them).
> > I currently do not have the capacity to investigate this but feel free to investigate yourself and ideally provide a patch :)
>
> *I'm in the analysis phase but performance will be in the critical path and I'll be happy to contribute :-)*
>
> > When it comes to performance your exact setup and configuration is important.
> > To compare local BDB with over-the-network JDBC is dubious.
> > Here are a couple of things to always consider when investigating performance:
> > * Are you using persistent or transient messages?
>
> *The messages are set to be persistent and the broker queue is durable.*
>
> > * Are you using transacted or auto-ack sessions?
>
> *It's an AUTO_ACK sessi
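[Editor's note: following up on the pool question in this thread - the `qpid.jdbcstore.bonecp.*` names mentioned are context variables, so (if they are honoured in this broker version) one would expect them to go in the virtual host's `context` map rather than as top-level attributes. A sketch, unverified; the thread itself reports mixed results, so check against the Broker-J JDBC store documentation.]

```json
{
  "context": {
    "qpid.jdbcstore.bonecp.partitionCount": "2",
    "qpid.jdbcstore.bonecp.minConnectionsPerPartition": "1",
    "qpid.jdbcstore.bonecp.maxConnectionsPerPartition": "4"
  }
}
```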
Implementing consumer broadcast with a single queue with Qpid Java Broker 6.0.4
Hello,

I analyzed whether it is possible to broadcast messages using a single queue. I know I could make the consumers listen to a topic, but I noticed that a temporary queue is created on the broker and I'm afraid of the broker performance if hundreds of consumers listen to the topic.

So far, I managed to broadcast in 2 ways:
- configure a queue on the broker with "ensureNondestructiveConsumers" and a default filter with a replay period, so as not to receive all the messages when a consumer connects. The problem is that the messages on the queue are never cleared. A TTL can solve that but I cannot lose messages.
- configure the dispatch router 0.6.1 to listen to the broker and expose an address with the distribution "multicast". This works very well.

To cover use cases where only the Qpid java broker can be used, I wonder if you know other solutions to broadcast a message to consumers using a single queue?

Thanks in advance,
Best regards,
Antoine
Re: Qpid java broker 6.0.4 - JDBC message store performance issues
Hello,

Thank you for your answers. It's a lot clearer for me now how the JDBC store behaves. You can find the answers to your questions below (originally in blue).

Regards,
Antoine

-----Original Message-----
From: Rob Godfrey [mailto:rob.j.godf...@gmail.com]
Sent: vendredi 6 janvier 2017 15:52
To: users@qpid.apache.org
Subject: Re: Qpid java broker 6.0.4 - JDBC message store performance issues

On 6 January 2017 at 14:06, Lorenz Quack <quack.lor...@gmail.com> wrote:

> Hello Antoine,
>
> Yes, it is expected that a new Connection is made for each enqueue and dequeue.
> The relevant code is org.apache.qpid.server.store.AbstractJDBCMessageStore#getConnection which is called from multiple places.

To be clear - the semantic behaviour is that each "transaction" gets a new connection ... if you use transactions in the client to combine multiple enqueues/dequeues then this will all happen on one connection.

*I don't know the code in details but is there a reason for not using a single connection for the JDBC message store lifecycle?*

Obviously this strategy doesn't work well if you are actually opening a new TCP connection each time (for in-memory Derby, which the code was originally written for, it doesn't matter too much if Qpid isn't pooling in any way, since the connections don't have much overhead), so using a connection caching provider is pretty much mandatory if you want to use an external RDBMS.

-- Rob

> We do our performance testing using the BDB store. We do not performance test other store types.
> Therefore, it is possible that the JDBC path is not as well tuned as the BDB one.
> I do not know of any obvious performance bottlenecks in the JDBC code (if they were obvious we would have probably fixed them).
> I currently do not have the capacity to investigate this but feel free to investigate yourself and ideally provide a patch :)

*I'm in the analysis phase but performance will be in the critical path and I'll be happy to contribute :-)*

> When it comes to performance your exact setup and configuration is important.
> To compare local BDB with over-the-network JDBC is dubious.
> Here are a couple of things to always consider when investigating performance:
> * Are you using persistent or transient messages?

*The messages are set to be persistent and the broker queue is durable.*

> * Are you using transacted or auto-ack sessions?

*It's an AUTO_ACK session*

> * Are the messages published sequentially by a single producer or multiple producers in parallel?

*4 producers are publishing the messages in parallel (they are 4 processes)*

> If you are publishing/consuming in parallel you might want to try tuning the connection pool size.

*Currently, I'm using the default (partitionCount 4, 10 connections per partition max). I tried to change these options in the configuration but did not succeed. I tried to set for instance { partitionCount: 2, minConnectionsPerPartition: 1, maxConnectionsPerPartition: 4 } in the VirtualHost configuration.*
*I looked at the code and tried as well qpid.jdbcstore.bonecp.maxConnectionsPerPartition instead of maxConnectionsPerPartition (same for min).*
*Do you know how I can change the BoneCP configuration?*

> * Are you consuming at the same time or are you first publishing and then consuming?

*Publishing and consuming at the same time.*

> * Are the messages being flown to disk (check broker logs for BRK-1014)?

*I did not see this log and I think there is enough memory (16g)*

> This might happen in low-memory conditions and is detrimental to performance because messages need to be reloaded from disk.
> * Is the broker enforcing producer-side flow control?

*I don't think so but how can I check?
Should I see a log too?*

> This might happen when running out of disk space and is obviously detrimental to performance.
> * ...
>
> I hope this somewhat helps with your investigation.
>
> Kind regards,
> Lorenz
>
> On 05/01/17 13:45, Antoine Chevin wrote:
>
>> Hello,
>>
>> I ran a benchmark using Qpid java broker 6.0.4 and the JDBC message store with an Oracle database.
>> I tried to send and read 1,000,000 messages to the broker but was not able to finish the benchmark as there was a StoreException caused by a java.net.ConnectException (full stack is attached).
>>
>> I suspected a very high number of connections.
>>
>> I tried using JDBC with BoneCP and the benchmark finished. I could get the BoneCP statistics and for 1,000,000 messages, there were 3,000,000 DB connections requested.
>>
>> It looks like the broker requests a connection when enqueuing and dequeuing the message with the JDBC store. Is it a normal behavior?
>>
>> Also
Qpid java broker 6.0.4 - JDBC message store performance issues
Hello,

I ran a benchmark using Qpid java broker 6.0.4 and the JDBC message store with an Oracle database. I tried to send and read 1,000,000 messages to the broker but was not able to finish the benchmark as there was a StoreException caused by a java.net.ConnectException (full stack is attached).

I suspected a very high number of connections.

I tried using JDBC with BoneCP and the benchmark finished. I could get the BoneCP statistics and for 1,000,000 messages, there were 3,000,000 DB connections requested. It looks like the broker requests a connection when enqueuing and dequeuing the message with the JDBC store. Is this a normal behavior?

Also, the benchmark showed that the JDBC store with Oracle was slower than the BDB store (an average throughput of 2.8K msg/s vs 5.4K msg/s). I suspected a degradation, as the Oracle store is located on a separate machine and the broker goes over the network to persist the messages. But not by that much. Do you know if there is a possible improvement in the JDBC message store code to narrow the gap?
Thank you in advance,
Best regards,
Antoine

#
# Unhandled Exception org.apache.qpid.server.store.StoreException: java.sql.SQLException: The Network Adapter could not establish the connection in Thread virtualhost-default-pool-1
#
# Exiting
#
org.apache.qpid.server.store.StoreException: java.sql.SQLException: The Network Adapter could not establish the connection
    at org.apache.qpid.server.store.AbstractJDBCMessageStore$JDBCTransaction.<init>(AbstractJDBCMessageStore.java:1120)
    at org.apache.qpid.server.store.jdbc.GenericAbstractJDBCMessageStore$RecordedJDBCTransaction.<init>(GenericAbstractJDBCMessageStore.java:118)
    at org.apache.qpid.server.store.jdbc.GenericAbstractJDBCMessageStore$RecordedJDBCTransaction.<init>(GenericAbstractJDBCMessageStore.java:114)
    at org.apache.qpid.server.store.jdbc.GenericAbstractJDBCMessageStore.newTransaction(GenericAbstractJDBCMessageStore.java:110)
    at org.apache.qpid.server.txn.AutoCommitTransaction.dequeue(AutoCommitTransaction.java:85)
    at org.apache.qpid.server.queue.AbstractQueue.dequeueEntry(AbstractQueue.java:1926)
    at org.apache.qpid.server.queue.AbstractQueue.dequeueEntry(AbstractQueue.java:1921)
    at org.apache.qpid.server.queue.AbstractQueue.checkMessageStatus(AbstractQueue.java:2521)
    at org.apache.qpid.server.virtualhost.AbstractVirtualHost$VirtualHostHouseKeepingTask.execute(AbstractVirtualHost.java:1284)
    at org.apache.qpid.server.virtualhost.HouseKeepingTask$1.run(HouseKeepingTask.java:65)
    at java.security.AccessController.doPrivileged(Native Method)
    at org.apache.qpid.server.virtualhost.HouseKeepingTask.run(HouseKeepingTask.java:60)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.sql.SQLException: The Network Adapter could not establish the connection
    at oracle.jdbc.driver.T4CConnection.logon(T4CConnection.java:412)
    at oracle.jdbc.driver.PhysicalConnection.<init>(PhysicalConnection.java:531)
    at oracle.jdbc.driver.T4CConnection.<init>(T4CConnection.java:221)
    at oracle.jdbc.driver.T4CDriverExtension.getConnection(T4CDriverExtension.java:32)
    at oracle.jdbc.driver.OracleDriver.connect(OracleDriver.java:503)
    at java.sql.DriverManager.getConnection(DriverManager.java:664)
    at java.sql.DriverManager.getConnection(DriverManager.java:247)
    at org.apache.qpid.server.store.jdbc.DefaultConnectionProvider.getConnection(DefaultConnectionProvider.java:49)
    at org.apache.qpid.server.store.jdbc.GenericJDBCMessageStore.getConnection(GenericJDBCMessageStore.java:121)
    at org.apache.qpid.server.store.AbstractJDBCMessageStore.newConnection(AbstractJDBCMessageStore.java:544)
    at org.apache.qpid.server.store.AbstractJDBCMessageStore$JDBCTransaction.<init>(AbstractJDBCMessageStore.java:1116)
    ... 18 more
Caused by: oracle.net.ns.NetException: The Network Adapter could not establish the connection
    at oracle.net.nt.ConnStrategy.execute(ConnStrategy.java:359)
    at oracle.net.resolver.AddrResolution.resolveAndExecute(AddrResolution.java:422)
    at oracle.net.ns.NSProtocol.establishConnection(NSProtocol.java:672)
    at
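[Editor's note on the pooling observation above: the benefit BoneCP brings here is simply that physical connections are reused instead of being opened once per store transaction (one per enqueue plus one per dequeue). A minimal, generic sketch of that pooling idea follows — it is illustrative only, not the broker's JDBC store or BoneCP code, and all names (`SimplePool`, `PoolDemo`) are hypothetical.]

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.function.Supplier;

// Minimal resource pool: hand out an idle instance if one exists,
// otherwise construct a new one. Connections stand in as plain Objects.
class SimplePool<T> {
    private final BlockingQueue<T> idle;
    private final Supplier<T> factory;
    private int created = 0; // how many resources were actually constructed

    SimplePool(int size, Supplier<T> factory) {
        this.idle = new ArrayBlockingQueue<>(size);
        this.factory = factory;
    }

    synchronized T acquire() {
        T t = idle.poll();
        if (t == null) {
            t = factory.get();
            created++;
        }
        return t;
    }

    synchronized void release(T t) {
        idle.offer(t); // return to the pool for reuse
    }

    synchronized int created() {
        return created;
    }
}

public class PoolDemo {
    public static void main(String[] args) {
        SimplePool<Object> pool = new SimplePool<>(1, Object::new);
        // 1,000,000 enqueue/dequeue cycles, each acquiring a "connection":
        for (int i = 0; i < 1_000_000; i++) {
            Object conn = pool.acquire(); // reuses the single pooled instance
            pool.release(conn);
        }
        // Without pooling this would have constructed a resource per cycle;
        // with the pool only one is ever created.
        System.out.println("connections created: " + pool.created());
    }
}
```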
RE: [Proton-c 0.14.0][Visual Studio 2013] Failing ssl unit test only in Debug mode
Hello,

We tried to investigate the problem further today. From what we understood of the ssl.exe test code: the test throws an exception in on_transport_error() because the certificate is wrong. This triggers the destruction of the objects on the stack. Apparently, there is a memory corruption in the iocpdesc_t structure: we are dereferencing (selector)->triggered_list_tail->triggered_list_next while (selector)->triggered_list_tail is null [in triggered_list_add() in selector.c, line 300]. Hence the crash.

Can you help us find more information about the bug? The crash is very low level and we do not have much experience in the proton-c layer...

Thank you,
Regards,
Antoine

-----Original Message-----
From: Adel Boutros [mailto:adelbout...@live.com]
Sent: Monday 17 October 2016 19:30
To: users@qpid.apache.org
Subject: [Proton-c 0.14.0][Visual Studio 2013] Failing ssl unit test only in Debug mode

Hello,

We are compiling Proton-c 0.14.0 and its C++ bindings in 2 modes: RelWithDebInfo and Debug. In RelWithDebInfo mode, all tests are green. In Debug mode, one ssl test fails. We are using OpenSSL 1.0.2h. SWIG and Cyrus are disabled (not found).

Can you please help us find the issue?
Test output
-----------
18: ======================================================================
18: ERROR: test_ssl_bad_name (__main__.ContainerExampleTest)
18: ----------------------------------------------------------------------
18: Traceback (most recent call last):
18:   File "PATH_TO_PROTON_SOURCE_CODE/examples/cpp/example_test.py", line 375, in test_ssl_bad_name
18:     out = self.proc(["ssl", "-a", addr, "-c", self.ssl_certs_dir(), "-v", "fail"], skip_valgrind=True).wait_exit()
18:   File "PATH_TO_PROTON_SOURCE_CODE/examples/cpp/example_test.py", line 180, in wait_exit
18:     self.check_()
18:   File "PATH_TO_PROTON_SOURCE_CODE/examples/cpp/example_test.py", line 164, in check_
18:     raise self.error
18: ProcError: ['ssl.exe', '-a', 'amqps://127.0.0.1:12202/examples', '-c', 'PATH_TO_PROTON_SOURCE_CODE\\examples/cpp/ssl_certs', '-v', 'fail'] non-0 exit, code=255
18:
18: certificate verification failed for host wrong_name_for_server
18: : The target principal name is incorrect.
18:

Command
-------
build_dir\Debug\examples\cpp\Debug\ssl.exe -c PATH_TO_PROTON_SOURCE_CODE\\examples/cpp/ssl_certs -v fail

Output
------
certificate verification failed for host wrong_name_for_server
: The target principal name is incorrect.
Exception in Visual Studio 2013
Unhandled exception at 0x07FB345782B2 (qpid-protond.dll) in ssl.exe: 0xC0000005: Access violation reading location 0x

Stack
-----
> qpid-protond.dll!triggered_list_add(pn_selector_t * selector, iocpdesc_t * iocpd) Line 300 C++
  qpid-protond.dll!pni_events_update(iocpdesc_t * iocpd, int events) Line 324 C++
  qpid-protond.dll!complete_read(read_result_t * result, unsigned long xfer_count, HRESULT status) Line 666 C++
  qpid-protond.dll!complete(iocp_result_t * result, bool success, unsigned long num_transferred) Line 868 C++
  qpid-protond.dll!pni_iocp_drain_completions(iocp_t * iocp) Line 888 C++
  qpid-protond.dll!iocp_map_close_all(iocp_t * iocp) Line 1044 C++
  qpid-protond.dll!pni_iocp_finalize(void * obj) Line 1151 C++
  qpid-protond.dll!pn_class_decref(const pn_class_t * clazz, void * object) Line 98 C++
  qpid-protond.dll!pn_class_free(const pn_class_t * clazz, void * object) Line 120 C++
  qpid-protond.dll!pn_free(void * object) Line 264 C++
  qpid-protond.dll!pn_io_finalize(void * obj) Line 95 C++
  qpid-protond.dll!pn_class_decref(const pn_class_t * clazz, void * object) Line 98 C++
  qpid-protond.dll!pn_decref(void * object) Line 254 C++
  qpid-protond.dll!pn_reactor_finalize(pn_reactor_t * reactor) Line 100 C++
  qpid-protond.dll!pn_reactor_finalize_cast(void * object) Line 106 C++
  qpid-protond.dll!pn_class_decref(const pn_class_t * clazz, void * object) Line 98 C++
  qpid-protond.dll!pn_decref(void * object) Line 254 C++
  qpid-proton-cppd.dll!proton::internal::pn_ptr_base::decref(void * p) Line 32 C++
  qpid-proton-cppd.dll!proton::internal::pn_ptr::~pn_ptr() Line 55 C++
  [External Code]
  qpid-proton-cppd.dll!proton::container_impl::~container_impl() Line 160 C++
  [External Code]
  ssl.exe!hello_world_direct::on_transport_error(proton::transport & t) Line 134 C++
  qpid-proton-cppd.dll!proton::messaging_adapter::on_transport_closed(proton::proton_event & pe) Line 303 C++
  qpid-proton-cppd.dll!proton::proton_event::dispatch(proton::proton_handler & handler) Line 74 C++
  qpid-proton-cppd.dll!proton::handler_context::dispatch(pn_handler_t * c_handler, pn_event_t * c_event, pn_event_type_t __formal) Line 74 C++
  qpid-protond.dll!pn_handler_dispatch(pn_handler_t * handler, pn_event_t * event, pn_event_type_t type) Line 104 C++
Re: Testing failover on dispatcher/java-broker cluster
Hi Ted,

Do you have any insights into this problem?

Thanks,
Antoine

> Hi Ted,
>
> You’re right, the connection close looked strange before the stopping of the broker. I manually added the annotation (# stopping the broker) and was wrong about its position. I replayed the test and the connection close happens *after* the broker stop. I assume it is the broker that initiates it.
>
> I found something interesting. In my test, I always sent a message while the broker was down, expecting to get a JmsSendTimedOutException (waiting for the disposition frame). I assumed this was harmless, but it turns out it is not. When I don’t do that, I can send a message after the broker restart. To sum up the experiment I did:
>
> * I use Wireshark between the JMS client and the dispatcher. *
>
> 1) Using JMS I establish a connection to the dispatcher and create a message producer (Wireshark: connection open -> attach)
> 2) I’m able to send a message to the broker through the dispatcher (Wireshark: transfer -> disposition)
> 3) I stop the broker
> 4) With the same link, I send a message and I get a JmsSendTimedOutException (waiting for the disposition frame) (Wireshark: transfer)
> 5) I restart the broker
> 6) With the same link, I try to send a message and I get a JmsSendTimedOutException for the same reason (waiting for the disposition frame) (Wireshark: transfer)
>
> If I skip step (4), I cannot reproduce step (6) and my messages arrive (Wireshark: transfer -> disposition) at the restarted broker.
>
> I hope it makes things clearer for you. Sorry for my rookie mistakes :-).
>
> Note: My colleague and I ran a small experiment to identify whether the problem comes from JMS or the AMQP protocol. He changed the code of the java broker to not send the disposition frame one time out of two.
>
> We got these results:
>
> * I use Wireshark between the JMS client and the patched broker. *
>
> 1) Using JMS I establish a connection to the patched broker and create a message producer (Wireshark: connection open -> attach)
> 2) I send a message to the broker and it replies with the disposition frame (Wireshark: transfer -> disposition)
> 3) I send a message to the broker, which drops the disposition frame. I get a send timeout in JMS (Wireshark: transfer)
> 4) I send a message to the broker and it replies with the disposition frame (Wireshark: transfer -> disposition). It works fine.
>
> We assume that there is something going on in the dispatcher.
>
> Thanks,
> Antoine
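[Editor's note: the send timeout the client hits in steps (4) and (6) above is governed by the qpid-jms URI options. For reference, a hypothetical connection URI (host and port are placeholders; option names per the qpid-jms client configuration documentation) that enables client-side reconnect and an explicit send timeout might look like:]

```
failover:(amqp://dispatcher-host:5672)?jms.sendTimeout=30000&failover.maxReconnectAttempts=-1
```

Whether failover reconnect helps here is a separate question from the stuck-link behaviour being discussed, since the link in question is never detached.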
Re: Testing failover on dispatcher/java-broker cluster
Hi Ted,

You’re right, the connection close looked strange before the stopping of the broker. I manually added the annotation (# stopping the broker) and was wrong about its position. I replayed the test and the connection close happens *after* the broker stop. I assume it is the broker that initiates it.

I found something interesting. In my test, I always sent a message while the broker was down, expecting to get a JmsSendTimedOutException (waiting for the disposition frame). I assumed this was harmless, but it turns out it is not. When I don’t do that, I can send a message after the broker restart. To sum up the experiment I did:

* I use Wireshark between the JMS client and the dispatcher. *

1) Using JMS I establish a connection to the dispatcher and create a message producer (Wireshark: connection open -> attach)
2) I’m able to send a message to the broker through the dispatcher (Wireshark: transfer -> disposition)
3) I stop the broker
4) With the same link, I send a message and I get a JmsSendTimedOutException (waiting for the disposition frame) (Wireshark: transfer)
5) I restart the broker
6) With the same link, I try to send a message and I get a JmsSendTimedOutException for the same reason (waiting for the disposition frame) (Wireshark: transfer)

If I skip step (4), I cannot reproduce step (6) and my messages arrive (Wireshark: transfer -> disposition) at the restarted broker.

I hope it makes things clearer for you. Sorry for my rookie mistakes :-).

Note: My colleague and I ran a small experiment to identify whether the problem comes from JMS or the AMQP protocol. He changed the code of the java broker to not send the disposition frame one time out of two.

We got these results:

* I use Wireshark between the JMS client and the patched broker. *

1) Using JMS I establish a connection to the patched broker and create a message producer (Wireshark: connection open -> attach)
2) I send a message to the broker and it replies with the disposition frame (Wireshark: transfer -> disposition)
3) I send a message to the broker, which drops the disposition frame. I get a send timeout in JMS (Wireshark: transfer)
4) I send a message to the broker and it replies with the disposition frame (Wireshark: transfer -> disposition). It works fine.

We assume that there is something going on in the dispatcher.

Thanks,
Antoine
Testing failover on dispatcher/java-broker cluster
Hello Qpid community,

I’m testing the resilience of a dispatcher/broker infrastructure and I noticed the following behavior. I run a test with one JMS client connected to a dispatcher, which is connected to a broker:

1) Using JMS I establish a connection to the dispatcher and create a message producer
2) I’m able to send a message to the broker through the dispatcher
3) I stop and restart the broker
4) I cannot send any messages using the message producer I created before
5) If I recreate a MessageProducer (new AMQP link), the message arrives at the broker

In the failing step 4, I noticed using Wireshark that the dispatcher does not send any messages to the broker, so I deduced that the broker is not responsible for this behavior.

*Is this expected behavior? What can I change in the dispatcher/JMS configuration to avoid the failure?*

You can find attached the Wireshark logs I produced from this experiment:
- JMS – dispatcher – reuse sender: logs between JMS and the dispatcher when I reuse the message producer after the restart
- JMS – dispatcher – new sender: logs between JMS and the dispatcher when I create a new message producer after the restart
- dispatcher – broker – reuse sender: logs between the dispatcher and the broker when I reuse the message producer
- dispatcher – broker – new sender: logs between the dispatcher and the broker when I create a new message producer

I’m using qpid-dispatch 0.6.0, JMS 0.9.0 and qpid-java-broker 6.0.1.
Thanks,
Best regards,
Antoine

src.ip = JMS ip
dst.ip = dispatcher ip

# Client connection
Source   Destination   Protocol   Length   Info
src.ip   dst.ip        TCP        66       53505 → 5672 [SYN] Seq=0 Win=65535 Len=0 MSS=1460 WS=2 SACK_PERM=1
dst.ip   src.ip        TCP        66       5672 → 53505 [SYN, ACK] Seq=0 Ack=1 Win=29200 Len=0 MSS=1460 SACK_PERM=1 WS=128
src.ip   dst.ip        TCP        54       53505 → 5672 [ACK] Seq=1 Ack=1 Win=65536 Len=0
src.ip   dst.ip        AMQP       62       Protocol-Header 1-0-0
dst.ip   src.ip        TCP        60       5672 → 53505 [ACK] Seq=1 Ack=9 Win=29312 Len=0
dst.ip   src.ip        AMQP       105      Protocol-Header 1-0-0 sasl.mechanisms
src.ip   dst.ip        AMQP       91       sasl.init
dst.ip   src.ip        TCP        60       5672 → 53505 [ACK] Seq=52 Ack=46 Win=29312 Len=0
dst.ip   src.ip        AMQP       76       sasl.outcome
src.ip   dst.ip        AMQP       300      Protocol-Header 1-0-0 open
dst.ip   src.ip        AMQP       185      Protocol-Header 1-0-0 open
# Creating the Session
src.ip   dst.ip        AMQP       86       begin
dst.ip   src.ip        AMQP       86       begin
src.ip   dst.ip        AMQP       86       begin
dst.ip   src.ip        AMQP       86       begin
# Creating MessageProducer
src.ip   dst.ip        AMQP       313      attach
dst.ip   src.ip        AMQP       374      attach flow
# Sending a message (success)
src.ip   dst.ip        AMQP       405      transfer
dst.ip   src.ip        AMQP       131      flow disposition
src.ip   dst.ip        TCP        54       53505 → 5672 [ACK] Seq=966 Ack=666 Win=64870 Len=0
src.ip   dst.ip        AMQP       62       (empty)
dst.ip   src.ip        TCP        60       5672 → 53505 [ACK] Seq=666 Ack=974 Win=32512 Len=0
src.ip   dst.ip        AMQP       62       (empty)
dst.ip   src.ip        TCP        60       5672 → 53505 [ACK] Seq=666 Ack=982 Win=32512 Len=0
dst.ip   src.ip        AMQP       62       (empty)
# Stopping broker
src.ip   dst.ip        TCP        54       53505 → 5672 [ACK] Seq=982 Ack=674 Win=64862 Len=0
src.ip   dst.ip        AMQP       62       (empty)
dst.ip   src.ip        TCP        60       5672 → 53505 [ACK] Seq=674 Ack=990 Win=32512 Len=0
# Trying to send a message (timeout)
src.ip   dst.ip        AMQP       406      transfer
dst.ip   src.ip        TCP        60       5672 → 53505 [ACK] Seq=674 Ack=1342 Win=33536 Len=0
src.ip   dst.ip        AMQP       62       (empty)
# Restarting broker
dst.ip   src.ip        TCP        60       5672 → 53505 [ACK] Seq=674 Ack=1350 Win=33536 Len=0
# Trying to send a message (timeout)
src.ip   dst.ip        AMQP       405      transfer
dst.ip   src.ip        TCP        60       5672 → 53505 [ACK] Seq=674 Ack=1701 Win=34560 Len=0
dst.ip   src.ip        AMQP       62       (empty)
src.ip   dst.ip        TCP        54       53505 → 5672 [ACK] Seq=1701 Ack=682 Win=64854 Len=0
src.ip   dst.ip        TCP        54       53505 → 5672 [RST, ACK] Seq=1701 Ack=682 Win=0 Len=0

dst.ip = dispatcher ip
src.ip = broker ip

# Connecting