>> I believe a JMeter plugin-like interface between the injector and a performance
>> repository would enable live monitoring of the system under test.
sebb>Exactly. The JMeter architecture is designed to allow for easy
integration of 3rd party plugins.

I need to double-check; however, I did not manage to specify a "path to
the plugins jar folder" via a command-line option.
Ultimately, I would love to have the JMeter installation and the plugins in
completely separate folders. That would simplify the "upgrade JMeter",
"check which plugins are installed", and "compose a test harness from Maven"
use cases.
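As an illustration, JMeter's `search_paths` property is one existing knob for pointing JMeter at plugin jars outside the installation folder; the exact semantics should be double-checked against the JMeter version in use, and the paths below are hypothetical:

```shell
# Hypothetical layout: JMeter installed in one folder, plugins kept elsewhere.
# search_paths is a semicolon-separated list of jars or directories that
# JMeter scans for additional plugin classes (samplers, listeners, etc.).
jmeter -n -t test.jmx \
  -Jsearch_paths=/opt/jmeter-plugins/stats-listener.jar
```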

Are there "asynchronous output" interfaces from JMeter?

Is there a way to send listener results via regular samplers?

sebb> Proper analysis should take place offline after the test has completed.
Very true.
However, it is quite important to be able to perform online analysis as
well, so the test can be adjusted while it runs:
say, adjust the load, fix bugs in the script, correct the system configuration, etc.

>> One can parse raw csv/xml results and upload for the analysis, however it
>> is likely to create big latency gap between collection and the
>> visualization.
>
sebb> Is that really a problem for most users?
How do we measure that?

Here are the most relevant scenarios for our company:
1) Durability testing. Say you launch a test script that runs for 1-7 days.
It is crucial to know whether the system is still stable.
That involves multiple KPIs (throughput, latency, failure rate %, memory
consumption, GC, CPU %), and the request-processing KPIs are not the
least important ones.
If the JMeter sampler data is not available until the whole test has
finished, that is a major drawback.

2) Automated scalability testing. Say you want to identify the maximum
load the system can sustain. One way to identify it is to gradually
increase the load and check whether the system remains stable (e.g. queues
do not build up, the failure rate is 0, response times are stable, etc.).
Having the data displayed in near real time helps a lot,
especially when you run the test suite in a different environment (e.g.
acceptance of new software/hardware).

3) Tight-schedule testing. When performing load testing in a customer
environment (e.g. acceptance of a production environment), it is
important to make informed decisions. It is good to see whether your test
works as expected while the test is running, not after you have done 4
hours of testing plus the analysis afterwards.

4) Red-green detection during regular 30-60 min testing. Our scenarios
involve multiple .jmx scripts and lots of samples.
We use the JMeter GUI only for script development; for load testing we
use the console only, to avoid injector slowness, out-of-memory errors, etc.
Currently it is hard to tell whether the test is going as expected: failure
rate %, number of samples, response times, etc.

sebb> That would be a sensible addition to JMeter to allow performance data
> to be readily saved to an arbitrary repo.
True.
Out-of-the-box samplers (e.g. the HTTP sampler) might work for the "output
interfaces".
For instance, an HTTP sampler placed under a "stats listener" could post the
result of the "stats listener" to a configured endpoint.

However, I believe this kind of integration should be asynchronous, to
prevent result collection from impacting the test scenario. With
synchronous result posting, we could end up with a multi-second test hang
due to lag on the performance repository receiver.
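To sketch what such an asynchronous output path could look like (this is an illustration of the queue-and-drain pattern, not JMeter's actual listener API; the class name and the sink abstraction are invented for the example): sampler threads enqueue results and return immediately, while a single daemon thread drains the queue in batches and hands them to whatever sink posts to the repository, so a slow receiver cannot stall the load-generating threads.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;
import java.util.function.Consumer;

// Illustrative async forwarder: sampler threads call offer() and never
// block; a daemon worker thread drains results in batches and passes
// them to the sink (e.g. an HTTP POST to the performance repository).
public class AsyncResultForwarder {
    private final BlockingQueue<String> queue;

    public AsyncResultForwarder(int capacity, Consumer<List<String>> sink) {
        this.queue = new LinkedBlockingQueue<>(capacity);
        Thread worker = new Thread(() -> {
            List<String> batch = new ArrayList<>();
            try {
                while (!Thread.currentThread().isInterrupted()) {
                    // Wait briefly for the first result, then grab up to
                    // 99 more so the sink is called once per batch.
                    String first = queue.poll(100, TimeUnit.MILLISECONDS);
                    if (first == null) continue;
                    batch.add(first);
                    queue.drainTo(batch, 99);
                    sink.accept(new ArrayList<>(batch));
                    batch.clear();
                }
            } catch (InterruptedException ignored) {
                // shutdown requested
            }
        });
        worker.setDaemon(true);
        worker.start();
    }

    // Called from sampler threads. Drops the result (returns false)
    // rather than block when the repository cannot keep up.
    public boolean offer(String resultLine) {
        return queue.offer(resultLine);
    }
}
```

The key design choice is bounded capacity with drop-on-overflow: losing a few monitoring samples is preferable to letting a lagging repository receiver distort the measured response times.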

Regards,
Vladimir Sitnikov
