Hi,

I disagree with the previous message. The principle is sound, but our
approaches differ even though we start from the same general idea. In
short, I would max out the test injector as much as possible. And by "as
much as possible" I mean: you don't want the way you generate load to
affect the final results. That's true. But in order to know that for
sure, you should validate your assumptions about how your script will
actually behave before running an important test! This is a very
important idea that I can't stress enough.

Merely not using the entire CPU doesn't put you in the safe zone:
something in your scenario may be invalid and keep the test from
generating load in a predictable fashion (and by predictable, I mean
that it won't surprise the person who created the script, not that the
script is monotonous), or it may use too many resources, and so on.

Also, using a large portion of the machine's resources does NOT imply
that your test will be "unstable" and the results affected. If the
resource consumption and the generated throughput are predictable and
stable, then you did a good job, and you can scale up the CPU
consumption (on an EC2 instance I would leave 10% available to the
system, provided the CPU usage doesn't spike during the test; so look
at maximum averaged values, not an absolute average usage). In 99% of
cases you should be fine, especially at the complexity of a
1,000-thread test. When designing tests with more threads, the CPU's
ability to cycle through them might become a bottleneck, but again you
can validate this beforehand: gradually increase the generated traffic,
create a mock script that doesn't hit the servers but uses test-client
resources the same way the actual script will, monitor the client
machine, compare client-side results with server-side results, etc.
Apart from the CPU, the networking involved adds some variation too, so
the monitoring needs to be broad enough (it depends on which resource
the script stresses most).
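To illustrate the "maximum averaged values, not an absolute average"
point, here is a minimal sketch in plain Python with hypothetical sample
data, showing how a quiet overall average can hide a sustained spike:

```python
# Sketch: judge CPU headroom by the maximum of short rolling averages
# instead of a single overall average. The samples below are made-up
# percent-utilization readings taken at a fixed interval during a test.

def max_rolling_average(samples, window=10):
    """Highest average over any `window` consecutive samples."""
    if len(samples) < window:
        return sum(samples) / len(samples)
    return max(
        sum(samples[i:i + window]) / window
        for i in range(len(samples) - window + 1)
    )

cpu_samples = [62, 65, 61, 88, 93, 95, 91, 64, 60, 63, 61, 66]

overall = sum(cpu_samples) / len(cpu_samples)
peak = max_rolling_average(cpu_samples, window=4)
print(f"overall average:      {overall:.1f}%")  # looks acceptable
print(f"max 4-sample average: {peak:.1f}%")     # reveals the spike
```

With data like this the overall average sits around 72%, which looks
fine, while the 4-sample window peaks above 91%, and that peak is the
number that matters when you decide how much headroom to leave.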

Does it affect results? It can, which is why you need to prepare the
test and be able to compare results with a third-party tool. So the
second part of making sure the test is OK is having a monitoring system
in place on the server side as well. All production servers use some
level of monitoring, so why not have it up and running by the time you
test? It's a must, IMHO.
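As a sketch of the client-side half of that comparison (assuming a
Linux injector, reading the aggregate "cpu" line of /proc/stat as
documented in proc(5)), something like this polled during the test
gives you samples to put next to the server-side graphs:

```python
# Minimal CPU sampler for a Linux load generator: read the aggregate
# "cpu" line of /proc/stat twice and compute busy time over the
# interval. Log these samples while the test runs, then compare the
# curve with server-side monitoring.
import time

def read_cpu_times():
    # First line of /proc/stat: "cpu  user nice system idle iowait ..."
    with open("/proc/stat") as f:
        values = [int(v) for v in f.readline().split()[1:]]
    idle = values[3] + values[4]  # idle + iowait count as not busy
    return sum(values), idle

def cpu_busy_percent(interval=1.0):
    total1, idle1 = read_cpu_times()
    time.sleep(interval)
    total2, idle2 = read_cpu_times()
    busy = (total2 - total1) - (idle2 - idle1)
    return 100.0 * busy / (total2 - total1)

if __name__ == "__main__":
    print(f"CPU busy over the last second: {cpu_busy_percent():.1f}%")
```

This is Linux-specific and only a starting point; in practice you would
run it (or sar/collectd/whatever your monitoring stack uses) on both
the injector and the servers so the two timelines can be compared.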

Once your test performs as you intend, you won't have to worry about
the test injector machines' resource consumption, because you will know
exactly how the test behaves. If resources are in check, the test
results won't be affected either. Unless JMeter has a bug that we don't
know about :). For simple metrics, I don't think it has.

Cheers,
Adrian S



On Wed, Oct 9, 2013 at 7:54 AM, Shmuel Krakower <shmul...@gmail.com> wrote:

> From my experience the CPU on the load generator should remain idle if you
> want to get stable results from the test, such results where you can
> actually compare with previous tests without the question of whether the
> load generator is overloaded.
>
> A gross number I go with is at the range of 30% CPU usage or less on the
> JMeter host.
>
> Of-course you always need to take into consideration the memory is not
> swapped in OS level (so it can be ~100% RAM usage).
> One other thing is JMeter JVM GC workloads, which should be minimal, you
> should make sure JMeter is not spending too much effort on GC, no matter
> what's the configured heap size.
>
> But that's a good question, I'd also like to know what others are doing.
>
>
>
> Shmuel Krakower.
> www.Beatsoo.org - re-use your jmeter scripts for application performance
> monitoring from worldwide locations for free.
>
>
> On Wed, Oct 9, 2013 at 5:08 AM, Tim Koopmans <t...@altentee.com> wrote:
>
> > At flood.io we find a better measure of performance and impact on test
> > results is JVM heap utilization.
> >
> > For example, this benchmark https://flood.io/954b7d5d79f134 shows
> > degradation of response time over time as heap utilization increases
> >
> >
> https://github.com/flood-io/flood-loadtest/blob/master/benchmarks/results/954b7d5d79f134.md
> >
> > Having said that we were running 30K users on a single JVM. You can find
> > out more about our benchmarks here:
> > https://flood.io/blog/11-benchmarking-jmeter-and-gatling
> >
> > You can correlate increased CPU of course with heavy resource utilization
> > within the JVM, but looking at CPU alone is like trying to measure
> rainfall
> > by listening to it fall on the roof.
> >
> > Regards,
> > Tim
> >
> >
> >
> >
> > Tim Koopmans
> > +61 417 262 008
> >
> > <http://altentee.com/>
> >
> > The Automation Company
> >
> >
> >
> > On Wed, Oct 9, 2013 at 1:00 PM, Ophir.Prusak <
> ophir.pru...@blazemeter.com
> > >wrote:
> >
> > > I'm running a JMeter test using JMeter on an amazon EC2 instance
> (large)
> > as
> > > the load server using 1,000 threads. The load server CPU is steady at
> > about
> > > 90% utilization and memory is at 70%.
> > >
> > > Is there a rule of thumb regarding at what point does the server not
> > having
> > > enough resources impact test results?
> > >
> > > Regarding CPU would you say 90%? 95% 99%? Regarding Memory would you
> say
> > > 90%? 95% 99%?
> > >
> > > Thanks Ophir
> > >
> > >
> > >
> > > --
> > > View this message in context:
> > >
> >
> http://jmeter.512774.n5.nabble.com/Is-my-load-server-causing-results-to-be-in-accurate-tp5718385.html
> > > Sent from the JMeter - User mailing list archive at Nabble.com.
> > >
> > > ---------------------------------------------------------------------
> > > To unsubscribe, e-mail: user-unsubscr...@jmeter.apache.org
> > > For additional commands, e-mail: user-h...@jmeter.apache.org
> > >
> > >
> >
>