On 26/03/2009, Noel O'Brien <nobr...@newbay.com> wrote:
> Hi,
>
>  I'm running some load testing, nothing too heavy. 10 JMeter threads, ramp up
>  is 30. JMeter is run from the command line and the test plan has no 
> listeners.
>  using the -l flag, I capture the results to a CSV file.

Good.

JMeter version?
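
For reference, a non-GUI run that writes results to a CSV file normally
looks something like this (the test plan and file names are just
placeholders):

    jmeter -n -t auth-test.jmx -l results.csv

-n selects non-GUI mode, -t names the test plan and -l the results file.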

>  I then load the results file into JMeter and look at them from a Summary
>  Listener, View Tree Listener etc. I'm a little confused by the elapsed time
>  results.
>
>  The app under test has the ability to log the time taken to receive the
>  request and complete sending the response, which for an auth API call it
>  lists as taking 114ms, for example (rounded to the nearest ms).
>
>  I've set up tcpdump on the machine running the app (referred to as the
>  server henceforth) and the machine running JMeter (referred to as the
>  client henceforth).
>
>  tcpdump on the server reports that the time between the request and response
>  packets is 113.83ms, while tcpdump on the client reports that the time
>  between the request and response packets is 113.96ms.
>
>  However, for that particular HTTP request, JMeter reports that the load time
>  (elapsed time) is 174ms and that the latency is also 174ms. How is this
>  discrepancy explained?

There is overhead in the OS, Java and JMeter on both sending and receiving.

Note also that the timer resolution will affect the reported elapsed time.
However, if you are running JMeter 2.3.2+ on Java 1.5+, it will use a
higher-resolution timer.
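
As a rough illustration of the difference between the two standard Java
clocks (a sketch only, not JMeter's code):

    // currentTimeMillis() can have a granularity of 10-15ms on some
    // platforms, whereas nanoTime() is a high-resolution clock, so short
    // samples can be reported more accurately with the latter.
    public class ClockSketch {
        public static void main(String[] args) throws InterruptedException {
            long startMs = System.currentTimeMillis();
            long startNs = System.nanoTime();
            Thread.sleep(114); // stand-in for a ~114ms sample
            long coarseMs = System.currentTimeMillis() - startMs;
            long fineMs = (System.nanoTime() - startNs) / 1000000L;
            System.out.println("coarse=" + coarseMs + "ms fine=" + fineMs + "ms");
        }
    }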

>  As I understand it, the latency is the time taken to receive the first byte
>  of the response, and since the load time and latency are the same, this
>  indicates that the response payload was received within 1ms. Is this correct?
>  Or is it even possible to achieve this? The payload size for this response is
>  106 bytes.

Latency is time to first response.
This may be the entire response, especially for small payloads.
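
If you want to see the two figures side by side while the test runs, a
BeanShell PostProcessor attached to the sampler can log them (a sketch;
prev is the variable JMeter exposes for the previous SampleResult, and
the Post-Processor itself is not counted in the sample time):

    // Compare elapsed and latency for each sample, both in milliseconds.
    long elapsed = prev.getTime();    // full sample time ("elapsed"/load time)
    long latency = prev.getLatency(); // time to first response
    log.info("elapsed=" + elapsed + "ms, latency=" + latency + "ms");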

>  From the tcpdump on the client, it's clear that the response packets are
>  available to JMeter after 113ms (*), so I'm concerned about what's causing
>  the increase.

(*) The OS (and Java) have to process the request before tcpdump sees
it, and likewise after tcpdump sees the response.

>  FWIW, the auth request sampler has an XPath Extractor Post-Processor, but
>  the processing time for that isn't included in the load time, right? All HTTP
>  Samplers are Java, not HTTPClient.

Post-Processors are not included in individual sample-times.

The HttpSampler does as little processing as possible whilst timing
the sample, but it has to issue the connect - retrying if necessary -
then send the data and process the response.

The connection time is not currently measured separately.
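
Very roughly, the window being timed covers something like this (a
simplified, self-contained sketch using plain HttpURLConnection, not the
actual HTTPSampler source; the URL is only a placeholder):

    import java.io.InputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class SampleTimingSketch {
        public static void main(String[] args) throws Exception {
            URL url = new URL("http://server:8080/auth"); // placeholder endpoint

            long start = System.currentTimeMillis();      // ~ sampleStart()
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.connect();                   // connect time is inside the window
            InputStream in = conn.getInputStream();
            in.read();                        // first byte of the response
            long latency = System.currentTimeMillis() - start; // ~ reported latency
            while (in.read() != -1) {
                // drain the remainder of the body
            }
            long elapsed = System.currentTimeMillis() - start; // ~ sampleEnd()
            in.close();

            System.out.println("latency=" + latency + "ms, elapsed=" + elapsed + "ms");
        }
    }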

I don't know why the times differ by as much as you are seeing.
Is this the case for all samples, or only a few?

And does it really matter, so long as the server can handle the load
it's supposed to?

>  Regards,
>
> Noel
>
>

