Re: Fw: Junit xml report output
Hi,

You'd have to transform the result; a quick search shows someone has already done this: https://github.com/tguzik/m2u

On Sun, Jan 15, 2017 at 12:02 PM, Adrian Cosmici wrote:
> Hello guys,
>
> I am trying to run JMeter tests with VSTS and Maven, and currently I am
> not able to publish the results report to VSTS because it cannot read the
> .jtl file.
>
> Could you please help me find a way to get an XML (JUnit) report file
> instead of the .jtl? Please note that I am running the tests with Maven
> and the jmeter-maven-plugin.
>
> Thanks in advance,
> Adrian Cosmici
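If a dependency on m2u is undesirable, a minimal transform can also be written by hand. The sketch below is an assumption-laden illustration, not m2u's code: it relies on JMeter's default XML JTL attributes (`t` = elapsed ms, `lb` = label, `rc` = response code, `s` = success on each `httpSample`) and the class name is mine. It emits a JUnit-style `testsuite` that VSTS-style CI tools can usually ingest:

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import java.util.Locale;

import javax.xml.parsers.DocumentBuilderFactory;

import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

public class JtlToJunit {

    // Convert a JTL (XML) result document into a minimal JUnit-style report.
    // Assumes the default XML JTL attributes: t = elapsed ms, lb = label,
    // rc = response code, s = success. Non-HTTP <sample> elements would need
    // the same treatment, and labels are not XML-escaped here; a real
    // transform should handle both.
    public static String convert(String jtlXml) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new ByteArrayInputStream(
                        jtlXml.getBytes(StandardCharsets.UTF_8)));
        NodeList samples = doc.getElementsByTagName("httpSample");
        StringBuilder cases = new StringBuilder();
        int failures = 0;
        for (int i = 0; i < samples.getLength(); i++) {
            Element s = (Element) samples.item(i);
            boolean ok = Boolean.parseBoolean(s.getAttribute("s"));
            double seconds = Long.parseLong(s.getAttribute("t")) / 1000.0;
            cases.append(String.format(Locale.ROOT,
                    "  <testcase name=\"%s\" time=\"%.3f\"",
                    s.getAttribute("lb"), seconds));
            if (ok) {
                cases.append("/>\n");
            } else {
                failures++;
                cases.append(">\n    <failure message=\"HTTP ")
                        .append(s.getAttribute("rc"))
                        .append("\"/>\n  </testcase>\n");
            }
        }
        return String.format(Locale.ROOT,
                "<testsuite name=\"jmeter\" tests=\"%d\" failures=\"%d\">\n%s</testsuite>\n",
                samples.getLength(), failures, cases);
    }
}
```

A failed sample (e.g. a 429 response) becomes a `<failure>` element, so the CI dashboard shows it as a failed test case.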
Re: Correct configuration of JMeter for testing TPS allocated
On 16 January 2017 at 13:26, alexk wrote:
> Hello,
>
> I was given access to a web service API that implements a throttling
> policy per user account. Each account has an allocated TPS; in my case
> the allocated TPS is 80.
>
> The problem is that for many of the requests I make to the API I get
> "HTTP Error 429 -- Too many requests, try back in 1 second", even when I
> set my client's TPS to 50.
>
> I am fairly confident about the throttling I am performing on my end (I
> tested with both Thread sleep and Guava's RateLimiter). When I brought
> this up with the API owner, they asked me to test with JMeter as they
> have done, and they certified that their API correctly implements TPS
> allocation.
>
> This is what I did. Testing from my production server's CLI with the
> configuration I have attached (http_req.jmx), I again received 429
> errors. I created a thread group with:
>
> Number of threads: 10 (the size of my Java client's thread pool)
> Ramp-up period: 10
> Loop count: Forever
>
> And a timer:
>
> Constant Throughput Timer: 4800

That looks fine, assuming the timer was set to calculate the throughput based on all active threads, not per thread.

> They told me that they did the test in a different way:
>
> Number of threads: 80
> Ramp-up period: 1
> Loop count: 1
>
> I believe their approach is not the correct way to perform the test, as
> it issues each request only once, whereas in my case the issue appears
> only after a while. However, given my very limited exposure to JMeter, I
> am not confident enough about my claims and approach either.

Their approach means the maximum throughput will depend on how quickly their server and JMeter warm up. It is hardly ever correct to use a loop count of 1.

> Could someone confirm whether my approach is correct, or indicate where I
> am wrong? I would also really appreciate a brief explanation I could
> convey to the owner's testing team about any flaws (if there are any) in
> their approach to JMeter testing.

See above.

I had a quick look at the attached JMX. I would recommend disabling or removing the Tree View and Table listeners, as they are expensive; add a Summary listener instead, as that will show the throughput.

You can also replace the HTTP sampler with a Java Request sampler, setting its sleep time/mask according to the expected response times from the server, and then run the test to see how JMeter behaves. I just tried this with the default sleep settings and it took quite a while for the throughput rate to build up to 40.

If you know how long the test takes and how many requests were serviced, you can manually calculate the average throughput as a cross-check. As I recall, the Summary listener calculates the cumulative rate rather than the peak rate, so it would be possible for the server to see a temporary overload if it uses a short measurement period. If you record basic test results (start time and elapsed time should be enough), you can process the file to measure the gaps between the samples, in case that is how they are measuring TPS.

> Thank you in advance
> Alex

-
To unsubscribe, e-mail: user-unsubscr...@jmeter.apache.org
For additional commands, e-mail: user-h...@jmeter.apache.org
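The gap-measuring idea above can be sketched as a small helper. This is not part of JMeter; the names are mine, and the input is assumed to be the `timeStamp` column (epoch millis) read out of a CSV JTL. Bucketing starts into one-second windows makes bursts visible: a per-second peak above 80 would explain 429s even when the cumulative average is only 50.

```java
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

public class TpsCheck {

    // Bucket sample start times (the JTL timeStamp column, in epoch millis)
    // into one-second windows: second -> number of requests started in it.
    public static Map<Long, Integer> perSecondCounts(List<Long> startMillis) {
        Map<Long, Integer> buckets = new TreeMap<>();
        for (long t : startMillis) {
            buckets.merge(t / 1000, 1, Integer::sum);
        }
        return buckets;
    }

    // The peak one-second rate -- the figure a per-second throttle actually
    // sees, which can exceed the cumulative average a Summary listener shows.
    public static int peakTps(List<Long> startMillis) {
        return perSecondCounts(startMillis).values().stream()
                .max(Integer::compare)
                .orElse(0);
    }
}
```

For example, five samples spread over two seconds may average 2.5/sec yet still contain a three-requests-in-one-second burst; that spike is what a short throttling window rejects.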
Correct configuration of JMeter for testing TPS allocated
Hello,

I was given access to a web service API that implements a throttling policy per user account. Each account has an allocated TPS; in my case the allocated TPS is 80.

The problem is that for many of the requests I make to the API I get "HTTP Error 429 -- Too many requests, try back in 1 second", even when I set my client's TPS to 50.

I am fairly confident about the throttling I am performing on my end (I tested with both Thread sleep and Guava's RateLimiter). When I brought this up with the API owner, they asked me to test with JMeter as they have done, and they certified that their API correctly implements TPS allocation.

This is what I did. Testing from my production server's CLI with the configuration I have attached (http_req.jmx), I again received 429 errors. I created a thread group with:

Number of threads: 10 (the size of my Java client's thread pool)
Ramp-up period: 10
Loop count: Forever

And a timer:

Constant Throughput Timer: 4800

They told me that they did the test in a different way:

Number of threads: 80
Ramp-up period: 1
Loop count: 1

I believe their approach is not the correct way to perform the test, as it issues each request only once, whereas in my case the issue appears only after a while. However, given my very limited exposure to JMeter, I am not confident enough about my claims and approach either.

Could someone confirm whether my approach is correct, or indicate where I am wrong? I would also really appreciate a brief explanation I could convey to the owner's testing team about any flaws (if there are any) in their approach to JMeter testing.
Thank you in advance
Alex

[Attachment: http_req.jmx -- thread group of 10 threads, 10 s ramp-up, looping forever; HTTPS POST to HOSTNAME/async with a JSON body, HttpClient4 implementation, 5000 ms connect and response timeouts; Header Manager setting Content-Type and Accept to application/json plus Authorization and Ocp-Apim-Subscription-Key headers; two result listeners; Constant Throughput Timer set to 4800.0 samples per minute.]
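For reference, the client-side throttling described above (Thread sleep or Guava's RateLimiter) comes down to spacing requests a fixed interval apart. The sketch below is not Guava's implementation (which uses a smoothing token-bucket); it is a minimal fixed-spacing limiter, with names of my own choosing, just to make the idea concrete:

```java
public class SimpleRateLimiter {

    private final long intervalNanos;
    private long nextFree = System.nanoTime();

    // permitsPerSecond: the target client-side TPS, e.g. 50 to stay well
    // under an 80 TPS quota.
    public SimpleRateLimiter(double permitsPerSecond) {
        this.intervalNanos = (long) (1_000_000_000L / permitsPerSecond);
    }

    // Block until the next permit is due, spacing calls one interval apart.
    // Returns the time waited, in seconds.
    public synchronized double acquire() {
        long now = System.nanoTime();
        long wait = Math.max(0, nextFree - now);
        nextFree = Math.max(now, nextFree) + intervalNanos;
        if (wait > 0) {
            try {
                Thread.sleep(wait / 1_000_000, (int) (wait % 1_000_000));
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
        return wait / 1e9;
    }
}
```

Note that even a correct limiter like this only bounds the average spacing on the client; if the server throttles over a window shorter than one second, scheduling jitter can still bunch requests and trigger 429s.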