Great feedback, Jordi - responses below.
On 11 Aug 2003 at 10:15, Jordi Salvat i Alabart wrote:
> [EMAIL PROTECTED] wrote:
> > I've been using JMeter as a user quite a bit the past few weeks, and I've learned
> > some things
> > about it. One is that it's very tedious to use, and so a lot of my thoughts have
> > to do with
> > creating more powerful tools to manipulate test scripts. I think I'd like to
> > introduce the idea of alternate ways to view a test plan, a la Eclipse, so
> > that different aspects of test plan editing can be brought to the forefront.
> >
>
> It's true that test editing is tedious, but I don't really see different
> "aspects" in such a heavy way as Eclipse -- maybe visualization options?
>
> Control vs. non-control elements: you had commented in the past about
> control elements (controllers & samplers) vs. non-control elements
> (where order essentially doesn't matter). Would be great to have an
> option to show/hide those non-control elements when viewing the tree.
> Also to see them in a separate panel showing all those applying to the
> current control element -- with 'inherited' ones greyed out. Most
> importantly because it would provide new (and not-so-new) users a
> clearer view of which non-control elements apply to which control elements.
[reordering]
> Bulk editing: A find/replace feature is the most obvious. Another nice one
> could be to be able to select multiple test elements of the same type
> and see the editor in the right panel show white fields for values that
> are equal in all of them -- you could edit these straight-away -- and
> fields with different values in grey -- possibly non-editable.
A perfect example - a view that shows you a slice of the test plan, by component
type, and provides an easy way to edit all at once. I would think you'd want such
code kept out of the existing GUI code, and thus it would be a separate module that
provided a different view of things. Right now, too many elements are closely
coupled in order to show the one particular view of things - JMeterTreeModel,
JMeterTreeListener, GuiPackage, for instance. The tree model should probably be a
dumber data model that actors manipulate, and that would provide a good start
toward implementing other views and editing options.
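To make that concrete, this is roughly the shape I have in mind - a dumb data
model plus a listener interface, so any number of views can observe the same
plan. All the names below are made up; it's a sketch, not existing JMeter code:

```java
// Sketch only -- every name here is invented, not existing JMeter code.
// The idea: a plain data model that knows nothing about Swing, plus a
// listener interface, so any number of views (the tree, a slice-by-type
// view, etc.) can observe and manipulate the same test plan.
interface PlanModelListener {
    void nodeAdded(PlanNode parent, PlanNode child);
    void nodeRemoved(PlanNode parent, PlanNode child);
}

class PlanNode {
    private final String name;
    private final java.util.List children = new java.util.ArrayList();

    PlanNode(String name) { this.name = name; }

    String getName() { return name; }
    java.util.List getChildren() { return children; }
}

class PlanModel {
    private final PlanNode root = new PlanNode("Test Plan");
    private final java.util.List listeners = new java.util.ArrayList();

    PlanNode getRoot() { return root; }

    void addListener(PlanModelListener l) { listeners.add(l); }

    void addNode(PlanNode parent, PlanNode child) {
        parent.getChildren().add(child);
        for (int i = 0; i < listeners.size(); i++) {
            ((PlanModelListener) listeners.get(i)).nodeAdded(parent, child);
        }
    }

    void removeNode(PlanNode parent, PlanNode child) {
        parent.getChildren().remove(child);
        for (int i = 0; i < listeners.size(); i++) {
            ((PlanModelListener) listeners.get(i)).nodeRemoved(parent, child);
        }
    }
}
```

The point is that the model fires events and holds data, nothing more - the
Swing tree would just be one listener among several.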
>
> Tree editing: Eclipse trees have a nice way of indicating whether to
> insert before, insert after, or add as child which would be very handy
> -- our current way is a pain. I don't know if that's doable in Swing,
> though.
Lots of little things like the drag and drop need polishing - I'd prefer to be
able to drag and drop multiple files at once, for instance. I'm not sure exactly
what you are referring to with Eclipse (I don't find myself dragging files around
in Eclipse), but I imagine you are thinking of a system whereby visual cues are
provided to indicate whether you're about to drop an element into, above, or
below a tree node. I wouldn't think that would be too hard.
>
>
> Protocol pre-selection: by having options on which protocols we want to
> use in the test we could avoid cluttering the menus with samplers &
> config elements not applicable to those protocols.
Yes, and maybe automatic adding of Cookie Managers to plans that include HTTPSamplers?
>
> Screen real-estate usage: reducing font size, getting rid of useless
> spacing, etc... so that more space is left for panels such as the HTTP
> request parameters.
Absolutely - I figured people would complain if I changed the font size though.
>
> Another usability issue: it would be really nice to have certain test
> elements provide a "dynamically-generated" default name (used in case
> you leave the Name field blank). E.g. "Timer: 1.5 sec.", "Timer:
> 10.0±5.0 sec.", "/home/index.jsp",...
>
> > Remote testing needs to be revamped because it's pointless to have 10 remote
> > machines all trying to stuff responses down the I/O throat of a single
> > controlling machine - better to have the remote machines keep the responses
> > till the end and not risk the accuracy of throughput measurements. Perhaps a
> > simpler format can be created for remote testing whereby during the test only
> > success/failure plus response time is sent to the controlling machine, and
> > everything else waits until the end of the test.
>
> I agree, but note that this means significant rewrite of all listeners,
> so that they can handle this two-phase input and still show meaningful
> results.
Or the SampleListener interface could be given an extra method:
summarySampleOccurred(long time, boolean success);
Really, all we need to know is that the test is running and samples are happening.
And at the end of the test, an easy way to retrieve the entire, fully recorded
results - which could be handled by your new analysis module.
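Roughly, the split might look like this - all invented names, none of this is
real JMeter code yet:

```java
// Hypothetical sketch of the two-phase remote-listener idea (invented names).
// While the distributed test runs, only a tiny summary event crosses the
// wire; the full results are shipped back once, at the end of the run.
interface SummarySampleListener {
    // called for every sample while the test is running
    void summarySampleOccurred(long time, boolean success);

    // called once after the run, when the remote engine sends everything back
    void fullResultsAvailable(java.util.List fullResults);
}

// A trivial implementation that keeps running totals for live display.
class RunningTotals implements SummarySampleListener {
    private long count = 0;
    private long failures = 0;
    private long totalTime = 0;

    public void summarySampleOccurred(long time, boolean success) {
        count++;
        totalTime += time;
        if (!success) {
            failures++;
        }
    }

    public void fullResultsAvailable(java.util.List fullResults) {
        // hand the complete data off to the (future) analysis module
    }

    long getCount() { return count; }
    long getFailures() { return failures; }
    double getAverageTime() { return count == 0 ? 0.0 : (double) totalTime / count; }
}
```

The controlling machine could drive its live visualizers from events like these
and never see a full SampleResult until the run is over.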
>
> > I want test results categorized by test run, and not just as a list of
> > SampleResults. A set of sample results has a metadata set that describes the
> > test run, and JMeter should be able to use such metadata to potentially
> > combine test run results and also display statistics comparing two test runs
> > (i.e., graphing # users vs. throughput).
>
> How about leaving listeners for real-time test result visualization &
> test result gathering/saving and having a separate application (or
> module) for more complex data analysis. Maybe there's something in the
> non-market we can use straight away?
Sounds great.
>
> > Result files need to be abstract datasources with an interface that
> > visualizers talk to without knowing whether the backing data is an XML file,
> > a CSV file, a database, etc. Right now, JMeter knows how to write CSV files,
> > but can't read them!
>
> Note this would make sense if we had the separate analysis application I
> was talking about.
>
> > A defined interface will help us modularize this code, whereas currently it's
> > mixed up with the code for reading and writing test plan files.
> >
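To sketch the shape of such an interface (every name below is invented, with a
CSV stream as the example backing):

```java
// Sketch of the abstract-datasource idea -- all names invented for
// illustration. A visualizer iterates SampleRecords without knowing
// whether the backing store is XML, CSV, or a database.
interface ResultDataSource {
    boolean hasNext();
    SampleRecord next();
    void close();
}

class SampleRecord {
    final String label;
    final long time;
    final boolean success;

    SampleRecord(String label, long time, boolean success) {
        this.label = label;
        this.time = time;
        this.success = success;
    }
}

// Example backing: a CSV stream of "label,elapsedMillis,success" lines.
class CsvResultDataSource implements ResultDataSource {
    private final java.io.BufferedReader reader;
    private String line;

    CsvResultDataSource(java.io.Reader in) throws java.io.IOException {
        reader = new java.io.BufferedReader(in);
        line = reader.readLine();
    }

    public boolean hasNext() {
        return line != null;
    }

    public SampleRecord next() {
        String[] f = line.split(",");
        SampleRecord r = new SampleRecord(f[0], Long.parseLong(f[1]),
                Boolean.valueOf(f[2]).booleanValue());
        try {
            line = reader.readLine();   // pre-fetch the next record
        } catch (java.io.IOException e) {
            throw new RuntimeException(e.toString());
        }
        return r;
    }

    public void close() {
        try {
            reader.close();
        } catch (java.io.IOException e) {
            // ignore on close
        }
    }
}
```

An XML or JDBC implementation would slot in behind the same interface, which is
exactly what decouples the visualizers from the file format.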
> > Visualizers should be able to output useful file types for distribution of
> > results to non-JMeter users - HTML and PNG files, for instance. Some way of
> > exporting the data to a format that can be easily posted.
>
> Again, a separate analysis tool could take care of this.
>
> > I wanted to make JMeter single-threaded with the new non-blocking IO
> > packages, but I don't think this is feasible.
>
> Definitely not doable for the Java samplers. Extremely difficult for
> JDBC, difficult and probably not worth it for the rest (just my view --
> seems to match yours though).
>
> Instead, I would focus on accuracy by raising the priority of threads
> during actual sampling. Would not improve total performance in terms of
> max throughput, but would improve measurement accuracy at mid and high
> loads.
I've thought about this but I don't think it scales up very high. The majority of
any JMeter thread's time is spent sleeping, either in a timer delay or waiting for
IO. Giving all your IO-waiting threads a higher priority doesn't help much. I also
think it might make things worse to give a bunch of threads sitting on IO calls
the highest priority!
>
> Some performance and accuracy tests would also be great. I'm thinking about
> how to do those. An important bit would be unused hardware available for
> a long term for this purpose only (or almost)... I think I can provide this.
I've used various techniques to ensure the accuracy of my numbers - primarily
running an extra test client with a very low load and comparing its numbers to
those of the high-load clients. I think the best way to handle it is through
documentation explaining these techniques and other ways of analyzing data.
Another way to help might be a visualizer that shows each sample as a line
spanning its begin and end times, making it easy to see overlapping samples, and
thus spot potential timing conflicts.
>
> > It's possible to do if you can get access to the very sockets that do the
> > communicating, but how will you get that for JDBC drivers? Even for HTTP,
> > we'd have to write our own HTTP client from which we could gain access to the
> > socket being used and control the IO for it (or take the commons client and
> > modify it so). Because to put it all in a single-threaded model, we'd have to
> > take control of the IO part, and force the samplers to hand their sockets to
> > some central code that would take the socket, take the bytes the sampler
> > wants to send, and hand back the return bytes plus timing info. It'd be nice,
> > but I don't think it's feasible for most protocols.
> >
> > JMeter needs to collect more data. Size of responses should be explicitly
> > collected to help throughput calculations of the form bytes/second. Timing
> > data should include a latency measurement in addition to the whole response
> > time.
>
> Totally agree. The complete split would be:
> 1- DNS resolution time
> 2- Connection set-up time (SYN to SYN ACK)
> 3- Request transmission time (SYN ACK to ACK of last request packet)
> 4- Latency (ACK of last request packet to 1st response data packet)
> 5- Response reception time
> I'm not sure JMeter is the tool to separate 1,2,3 (this is more of an
> infrastructure-level thing rather than application-level), but 1+2+3+4
> separate from 5 is a must. Top commercial tools separate them all.
You mention socket factories - is it possible for JMeter to control all sockets
created within the JVM? And, if so, couldn't JMeter by that means take control of
the low-level input and output? The question then becomes: how do we match up
this data from the low-level socket control to the Sampler responsible for the
data?
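For the latency part at least, we might not even need full socket control -
wrapping the response stream could be enough. A sketch, with an invented name:

```java
// Sketch, invented name: wrap the response InputStream so a sampler can
// read off "time to first byte" (latency) separately from total time.
class TimingInputStream extends java.io.FilterInputStream {
    private long firstByteAt = -1;

    TimingInputStream(java.io.InputStream in) {
        super(in);
    }

    public int read() throws java.io.IOException {
        int b = super.read();
        if (b != -1) markFirstByte();
        return b;
    }

    public int read(byte[] buf, int off, int len) throws java.io.IOException {
        int n = super.read(buf, off, len);
        if (n > 0) markFirstByte();
        return n;
    }

    private void markFirstByte() {
        if (firstByteAt < 0) {
            firstByteAt = System.currentTimeMillis();
        }
    }

    // timestamp of the first response byte, or -1 if nothing was read yet
    long getFirstByteTime() {
        return firstByteAt;
    }
}
```

The sampler would note when it finished sending the request; latency is
getFirstByteTime() minus that, and total response time is measured as today.
The matching problem remains, though: the sampler has to be the one doing the
wrapping, which is easy for our own HTTP code and hard for a JDBC driver.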
>
> More accurate simulation of browser behaviour in terms of # of
> concurrent connections, keep-alives, etc. would also be great. Even in
> terms of available bandwidth: simulating modem/ISDN/ADSL users. Again,
> this may not be JMeter's job -- application-level testing is more
> important, IMO.
>
> The problem is same as above: this requires access to the internals of
> the client code. How to do this for JDBC? Maybe changing socket
> factories? But it's a must, so we need to think about it.
>
> > Multiple SampleResponses need to be
> > dealt with better - I'm thinking that instead of an API that looks like:
> >
> > Sampler {
> >     SampleResult sample();
> > }
> >
> > We need one that's more based on a callback situation:
> >
> > Sampler {
> >     void sample(SendResultsHereService callback);
> > }
> >
> > so that Samplers can send multiple results to the collector service. This
> > would make samplers more flexible for when scripting in Python is allowed -
> > to allow the ad-hoc scripter to push out sample results at any time during
> > their script.
> >
> I feel pushing out multiple separate samples belongs more to controller
> land rather than sampler land...
Good point - I'm all in favor of controllers sending out SampleResult events.
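Something like this, perhaps - all names invented, just to show the shape:

```java
// Sketch of the callback-style sampling API -- every name is invented.
// Samplers (or controllers) may emit any number of results per call.
interface SampleCollector {
    void sampleOccurred(SampleResult result);
}

class SampleResult {
    final String label;
    final long elapsed;

    SampleResult(String label, long elapsed) {
        this.label = label;
        this.elapsed = elapsed;
    }
}

interface CallbackSampler {
    void sample(SampleCollector callback);
}

// Example: reporting a redirect chain as two results -- the kind of thing a
// controller (or an HTTP sampler) is forced to fold into one result today.
class RedirectingSampler implements CallbackSampler {
    public void sample(SampleCollector callback) {
        callback.sampleOccurred(new SampleResult("GET /old", 40));
        callback.sampleOccurred(new SampleResult("GET /new", 110));
    }
}
```

A Python script hook would just be another CallbackSampler, calling the
collector whenever it likes.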
>
> > Given this, components like assertions and post-processors need a way to
> > know which result to apply themselves to. We already have this problem
> > wherein redirected samples confuse these components. We need a way to either
> > mark a particular response as "the main one" or define a response set all of
> > which need to be tested by the applicable post-processors.
>
> Isn't the current "sample-tree" structure correct for this? Wouldn't it
> be enough to have post-processors, listeners,etc. know about such
> "structured" sample results?
You're probably right.
>
> > I'd also like to replace the Avalon Configuration stuff with something that
> > can load files in a more stream-like, piecemeal fashion, instead of creating
> > a DOM and then handing it over to JMeter. It goes too long without any
> > feedback for the user. Plus it uses a ton of memory.
>
> Maybe java.beans.XMLEncoder/Decoder can help? (Never used it, just
> adding it to the long list).
>
> > Sun's HTTP Client should be replaced. As the cornerstone of JMeter, we ought
> > to have one that is highly flexible to our needs, provides the most accurate
> > timing it can, gives the best performance possible, is as light on resources
> > as possible, and is as transparent as possible to JMeter's controlling code.
> > I think the commons HTTP Client is probably a good place to start - being
> > open-source, we can craft it to our needs.
>
> Totally agree that it needs to be replaced and that the HTTP Client is
> our best bet.
Seems like we all think that.
-Mike
>
> > Well, that's a start :-)
> >
> --
> Salut,
>
> Jordi.
>
>
>
--
Michael Stover
[EMAIL PROTECTED]
Yahoo IM: mstover_ya
ICQ: 152975688
AIM: mstover777