Hello,

Thank you for your answers. It's a lot clearer to me now how the JDBC store
behaves. You can find my answers to your questions inline below (between
asterisks).

Regards,
Antoine

-----Original Message-----
From: Rob Godfrey [mailto:rob.j.godf...@gmail.com]
Sent: vendredi 6 janvier 2017 15:52
To: users@qpid.apache.org
Subject: Re: Qpid java broker 6.0.4 - JDBC message store performance issues

On 6 January 2017 at 14:06, Lorenz Quack <quack.lor...@gmail.com> wrote:

> Hello Antoine,
>
> Yes, it is expected that a new Connection is made for each enqueue and
> dequeue.
> The relevant code is org.apache.qpid.server.store.A
> bstractJDBCMessageStore#getConnection which is called from multiple
> places.
>
>
To be clear - the semantic behaviour is that each "transaction" gets a new
connection ...  if you use transactions in the client to combine multiple
enqueues/dequeues then this will all happen on one connection.

*I don't know the code in detail, but is there a reason for not using a
single connection for the whole lifecycle of the JDBC message store?*
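To make Rob's point above concrete, here is a minimal sketch of the two patterns: one store connection per transaction (what a non-transacted/auto-ack client effectively causes) versus many enqueues combined into a single client transaction. Python and sqlite3 are used purely as a self-contained stand-in; the broker code is Java, and the function names here are illustrative, not Qpid's.

```python
import sqlite3

DB = ":memory:"  # in-memory stand-in for the real RDBMS, so the sketch is self-contained


def enqueue_per_transaction(messages):
    """One store transaction -- and thus one connection -- per message,
    the behaviour described for non-transacted clients."""
    connections_opened = 0
    for msg in messages:
        conn = sqlite3.connect(DB)  # new connection per transaction
        connections_opened += 1
        conn.execute("CREATE TABLE IF NOT EXISTS queue (body TEXT)")
        conn.execute("INSERT INTO queue VALUES (?)", (msg,))
        conn.commit()
        conn.close()
    return connections_opened


def enqueue_batched(messages):
    """All enqueues combined in one client transaction: the store sees a
    single transaction, hence a single connection."""
    conn = sqlite3.connect(DB)
    conn.execute("CREATE TABLE IF NOT EXISTS queue (body TEXT)")
    for msg in messages:
        conn.execute("INSERT INTO queue VALUES (?)", (msg,))
    conn.commit()
    conn.close()
    return 1


msgs = [f"m{i}" for i in range(100)]
print(enqueue_per_transaction(msgs))  # 100 connections
print(enqueue_batched(msgs))          # 1 connection
```

The same batching effect is what a transacted JMS session gives you on the client side: the broker groups all the enqueues/dequeues of one commit into one store transaction.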

Obviously this strategy doesn't work well if you are actually opening a new
TCP connection each time (for in-memory Derby, which the code was
originally written for, it doesn't matter too much if Qpid isn't pooling in
any way, since the connections don't have much overhead), so using a
connection caching provider is pretty much mandatory if you want to use an
external RDBMS.
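A toy illustration of why a caching provider is effectively mandatory over the network: a pool hands back cached physical connections instead of paying a TCP/login handshake per store transaction. This is a deliberately minimal sketch, not BoneCP's actual design.

```python
import sqlite3
from queue import Queue


class TinyPool:
    """Minimal connection cache: physical connections are opened at most
    once per idle slot and reused across transactions."""

    def __init__(self, db, size):
        self.db = db
        self.size = size
        self.created = 0   # how many physical connections were actually opened
        self.idle = Queue()

    def acquire(self):
        if not self.idle.empty():
            return self.idle.get()  # reuse a cached connection
        self.created += 1           # only pay the open cost on a cold pool
        return sqlite3.connect(self.db)

    def release(self, conn):
        self.idle.put(conn)         # keep it cached rather than closing


pool = TinyPool(":memory:", size=4)
for _ in range(1000):               # 1000 "store transactions"
    conn = pool.acquire()
    conn.execute("SELECT 1")
    pool.release(conn)
print(pool.created)                 # 1: sequential use reuses one cached connection
```

Under concurrent load the pool would open up to `size` connections, but never one per transaction.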

-- Rob



> We do our performance testing using the BDB store. We do not
> performance test other store types.
> Therefore, it is possible that the JDBC path is not as well tuned as
> the BDB one.
> I do not know of any obvious performance bottlenecks in the JDBC code
> (if they were obvious we would probably have fixed them already).
> I currently do not have the capacity to investigate this but feel free
> to investigate yourself and ideally provide a patch :)

*I'm in the analysis phase, but performance will be on the critical path,
and I'll be happy to contribute :-)*
>
> When it comes to performance your exact setup and configuration is
> important.
> Comparing a local BDB store with JDBC over the network is of dubious value.
> Here are a couple of things to always consider when investigating
> performance
>  * Are you using persistent or transient messages?
*The messages are set to be persistent and the broker queue is durable.*
>  * Are you using transacted or auto-ack sessions?
*It's an AUTO_ACK session*
>  * Are the messages published sequentially by a single producer or
> multiple producers in parallel?
*4 producers are publishing the messages in parallel (they are 4 processes)*
>    If you are publishing/consuming in parallel you might want to try
> tuning the connection pool size.



*Currently, I'm using the defaults (partition count 4, max 10 connections
per partition). I tried to change these options in the configuration but
did not succeed. For instance, I tried setting { partitionCount: 2,
minConnectionsPerPartition: 1, maxConnectionsPerPartition: 4 } in the
VirtualHost configuration.*
*I looked at the code and also tried
qpid.jdbcstore.bonecp.maxConnectionsPerPartition instead of
maxConnectionsPerPartition (same for min).*

*Do you know how I can change the BoneCP configuration?*
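*For reference, the second attempt corresponds to setting the prefixed names
as context variables on the virtual host, along these lines (a sketch of what
I tried; attribute names other than the qpid.jdbcstore.bonecp.* ones may not
be exact):*

```json
{
  "name": "default",
  "type": "JDBC",
  "connectionPoolType": "BONECP",
  "context": {
    "qpid.jdbcstore.bonecp.partitionCount": "2",
    "qpid.jdbcstore.bonecp.minConnectionsPerPartition": "1",
    "qpid.jdbcstore.bonecp.maxConnectionsPerPartition": "4"
  }
}
```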

>  * Are you consuming at the same time or are you first publishing and
> then consuming?
*Publishing and consuming at the same time.*
>  * Are the messages being flown to disk (check broker logs for BRK-1014)?
*I did not see this log, and I think there is enough memory (16 GB).*
>    This might happen in low memory conditions and is detrimental to
> performance because messages need to be reloaded from disk.
> * Is the broker enforcing producer side flow control?
*I don't think so, but how can I check? Should I see a log message too?*

>    This might happen when running out of disk space and is obviously
> detrimental to performance.
>  * ...
>
>
> I hope this somewhat helps with your investigation.
>
> Kind regards,
> Lorenz
>
>
>
>
> On 05/01/17 13:45, Antoine Chevin wrote:
>
>> Hello,
>>
>> I ran a benchmark using Qpid java broker 6.0.4 and the JDBC message
>> store with an Oracle database.
>> I tried to send and read 1,000,000 messages to the broker but was not
>> able to finish the benchmark as there was a StoreException caused by
>> a java.net.ConnectException (full stack is attached).
>>
>> I suspected a very high number of connections.
>>
>> I tried using JDBC with BoneCP and the benchmark finished. I could
>> get the BoneCP statistics and for 1,000,000 messages, there were
>> 3,000,000 DB connections requested.
>>
>> It looks like the broker requests a connection when enqueuing and
>> dequeuing the message with the JDBC store. Is it a normal behavior?
>>
>> Also, the benchmark showed that the JDBC store with Oracle was slower
>> than the BDB store. (an average throughput of 2.8K msg/s vs 5.4K msg/s).
>> I expected some degradation, since the Oracle database is on a
>> separate machine and the broker goes over the network to persist the
>> messages, but not that much.
>> Do you know if there is a possible improvement in the JDBC message
>> store code to narrow the gap?
>>
>> Thank you in advance,
>> Best regards,
>> Antoine
>>
>>
>> ---------------------------------------------------------------------
>> To unsubscribe, e-mail: users-unsubscr...@qpid.apache.org For
>> additional commands, e-mail: users-h...@qpid.apache.org
>>
>
>
>
>
