GitHub user tpeyrard opened a pull request:

    https://github.com/apache/jmeter/pull/211

    Measure Time to First byte in JDBCSampler

    Hello,
    
I use JMeter to measure the performance of a database. When executing 
queries that return many rows, the time JMeter takes to read the ResultSet, 
store the result in a StringBuilder, and then call toString() can exceed the 
execution time on the server.
    
    In order to easily understand what is taking time for slow queries, I 
thought it might be interesting to use the "Latency" and "Connect Time" fields 
of the SampleResult to show two pieces of information:
    
    - **Connect time**: time needed to establish the connection (previously 
this was included in the Latency)
    - **Latency**: time taken to get the first ResultSet, i.e. time to first 
byte (or whatever the statement returns, if it is not a ResultSet)
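    As an illustration (not the PR's actual code; the class and method names 
here are hypothetical), the proposed split of a JDBC sample into the three 
metrics could be sketched like this:

```java
// Hypothetical sketch of the timing split proposed above; this is not
// JMeter's actual SampleResult code.
public class JdbcTimingSketch {

    // Given the four timestamps of a JDBC sample (all in milliseconds),
    // return {connectTime, latency, elapsed}:
    //   connectTime - connection established ("Connect Time")
    //   latency     - first ResultSet received (TTFB, "Latency")
    //   elapsed     - full sample, including reading the ResultSet
    static long[] split(long start, long connectEnd, long firstResult, long end) {
        return new long[] { connectEnd - start, firstResult - start, end - start };
    }

    public static void main(String[] args) {
        // e.g. connection ready at t=30, first ResultSet at t=80, done at t=100
        long[] t = split(0, 30, 80, 100);
        System.out.println("connect=" + t[0] + " latency=" + t[1]
                + " elapsed=" + t[2]);
    }
}
```

    With such a split, a slow query whose latency is close to its elapsed 
time is slow on the server, while a large gap between the two points at the 
client-side ResultSet processing described above.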
    
    What do you think of that change?
    
    Regards,
    Thomas

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/tpeyrard/jmeter jdbc

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/jmeter/pull/211.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #211
    
----
commit 2ea7107744fabcd242ea77f5ff91e46487b1407d
Author: Thomas Peyrard <[email protected]>
Date:   2016-06-24T15:28:23Z

    In order to measure time to first byte for the JDBC sampler:
    - Use connect time instead of latency to measure connection time
    - Use latency to measure the time at which the first ResultSet (or 
whatever the statement returns) is received from the connection

----


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at [email protected] or file a JIRA ticket
with INFRA.
---