On 24/03/2008, infernix <[EMAIL PROTECTED]> wrote:
> Hi,
>
>  I've been playing with getting useful results from a very large dataset,
>  and am wondering whether others have suggestions or better solutions.
>
>  Jmeter ran with a testplan that has:
>
>  - 1 thread group, 1200 threads, ramp-up 60s, scheduled for 6hrs

I would probably use a longer ramp-up for that many threads,
especially since the test runs for a long time.

>  - 7 http requests, all with a response assertion (validates content),
>  and a Result Status Action Handler (stops thread)
>  - 1 Assertion Results that logs errors only
>  - 1 Aggregate Report that writes to the JTL file.

Summary report uses less memory.

[Simple data writer even less]


Use CSV format for smaller output files, and save only the minimum set
of fields you need.
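For example, something like this in jmeter.properties (untested here;
the property names are the save-service settings as shipped with JMeter,
double-check them against the comments in your own jmeter.properties):

```properties
# Write results as CSV rather than XML
jmeter.save.saveservice.output_format=csv
# Don't store response/sampler data in the log - this is the big one
jmeter.save.saveservice.response_data=false
jmeter.save.saveservice.samplerData=false
# Keep only the fields you actually graph
jmeter.save.saveservice.timestamp=true
jmeter.save.saveservice.time=true
jmeter.save.saveservice.label=true
jmeter.save.saveservice.successful=true
```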

>  - the point being: simulate a real-world test for a flash app that
>  queries a cluster with 9 requests per view, 20 viewers (aka clients) per
>  second, for 6 hours straight
>
>  After letting this run for 6 hours, I got a 1.6GB large JTL file. I
>  already figured out that it's impossible to load into jmeter. It's also
>  impossible to parse with the XSL templates. I found out that xsltproc
>  and xalan are way too slow (took >10 minutes on a 33MB jtl file), but
>  after a long search I grabbed Saxon 6.5.5
>  (http://saxon.sourceforge.net/) and this gave me good results.
>
>  Saxon ran out of memory on the 1.5GB JTL file as well (with Xmx3000m,
>  sun-java6). It turned out that it could handle files of about 200MB. So
>  I hacked up a fairly simple piece of perl I found on the net to split it
>  in files of 750000 records each (http://pastebin.ca/955380).
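For the record, that splitting step can also be done in a few lines of
Python. Untested sketch; it assumes one self-closing sample element per
line, which holds when you don't save nested assertion results or
response data. The function and chunk-file names are mine, not from
your script:

```python
import re

def split_jtl(path, records_per_chunk=750000, prefix="chunk"):
    """Split a JMeter XML JTL into smaller, still well-formed files.

    Assumes each <httpSample .../> or <sample .../> sits on its own line.
    """
    header = '<?xml version="1.0" encoding="UTF-8"?>\n<testResults version="1.2">\n'
    footer = '</testResults>\n'
    sample_re = re.compile(r'<(httpSample|sample)\b')
    out, count, part = None, 0, 0
    with open(path) as src:
        for line in src:
            if not sample_re.search(line):
                continue  # skip the XML prolog and testResults wrapper
            if out is None or count >= records_per_chunk:
                if out is not None:
                    out.write(footer)
                    out.close()
                part += 1
                out = open(f"{prefix}-{part:03d}.jtl", "w")
                out.write(header)
                count = 0
            out.write(line)
            count += 1
    if out is not None:
        out.write(footer)
        out.close()
    return part  # number of chunk files written
```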
>
>  The resulting files can be processed by JMeter again, but the graphs
>  that are generated make no sense. So I have been playing around with the
>  xsl files in extras, and changed the jmeter-results-report_21.xsl
>  report a bit to include some extra information such as the time of the
>  first sample, last sample (e.g. test start+end times), and total MBytes
>  of data. This updated xsl (http://pastebin.ca/955390) does need
>  xsl-date-time (http://www.djkaty.com/drupal/xsl-date-time) to convert
>  unixtime into ISO time.
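Side note: if you post-process outside XSL you don't need xsl-date-time
at all, since JMeter timestamps are just milliseconds since the epoch.
A quick Python illustration (the function name is mine):

```python
from datetime import datetime, timezone

def jtl_ts_to_iso(ms):
    """Convert a JMeter timestamp (milliseconds since the epoch, UTC)
    to an ISO-8601 string."""
    return datetime.fromtimestamp(ms / 1000, tz=timezone.utc).isoformat()
```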
>
>  So, I now have a HTML report for each 750000 records, and I simply
>  concatenate the 9 html reports into one HTML file. But that's not the
>  only information I need - I can't really make a useful graph out of it
>  that has a timescale.
>
>  Next I used jtl2csv.py off the wiki and hacked it so it converts the
>  original 1.5GB Assertion Results JTL file into a Simple Data Writer log
>  (http://pastebin.ca/955398). I then hacked jtlmin.sh so it also adds
>  maxresponsetime in its OUT file (http://pastebin.ca/955409). So now I
>  have an OUT file with which I can actually create a graph per minute -
>  and this does work on the full test results, which is a good thing.
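Once the data is CSV, that per-minute aggregation (including max
response time) is also easy to redo in a few lines of Python rather
than shell. Rough sketch; it assumes JMeter's default 'timeStamp' and
'elapsed' header names, so adjust to whatever fields you save:

```python
import csv
from collections import defaultdict

def per_minute_stats(csv_path):
    """Aggregate a CSV JTL into per-minute count/avg/max response times.

    Expects a header row with 'timeStamp' (ms since epoch) and
    'elapsed' (ms) columns, as in JMeter's default CSV output.
    """
    buckets = defaultdict(list)
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            minute = int(row["timeStamp"]) // 60000  # ms -> minute bucket
            buckets[minute].append(int(row["elapsed"]))
    return {
        m: {"count": len(v), "avg": sum(v) / len(v), "max": max(v)}
        for m, v in sorted(buckets.items())
    }
```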
>
>  After this learning curve, I stumbled upon StatAggVisualizer
>  (http://rubenlaguna.com/wp/better-jmeter-graphs/) and obviously I
>  immediately wanted to try it on my dataset. Unfortunately, it didn't
>  include makeTitlePanel() in the binary, so I grabbed the source,
>  uncommented that line, built it with ant and loaded it into JMeter. But
>  when I choose one of the split JTL files in the StatAggVisualizer
>  TitlePanel file field, it does not parse the file or generate a graph. This
>  is rather unfortunate, as it seems that this one only works if you add
>  it to the test plan while it runs.
>
>  Okay, so now for the questions part.
>
>  - Am I doing things the right way [tm] with regards to the test plan setup?

See above.

>  - How can JMeter itself generate useful results (read: graphs with a
>  timeline) out of very large tests like this?
>
>  - Can someone look at the StatAggVisualizer and patch it so it loads
>  .jtl datasets and generates a graph out of them? I've tried this, but I am
>  simply not a coder.
>
>  - Last but not least, if others have experience with testing large
>  datasets like this, and are willing to share their insights, I would
>  really appreciate it. There's stuff on the wiki but not all of that can
>  be applied to huge datasets.

CSV files can be used as input to all sorts of stats packages and/or
spreadsheets, which may offer more flexibility.
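As a quick illustration, here is a rough Python sketch that pulls
overall numbers straight out of a CSV log without loading it all into a
GUI. The column names assume JMeter's default CSV header ('timeStamp',
'success'), so adjust them to the fields you actually save:

```python
import csv

def summarize(csv_path):
    """Overall summary of a JMeter CSV log: samples, error rate, throughput.

    Streams the file row by row, so it copes with multi-GB logs.
    """
    first = last = None
    total = errors = 0
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            ts = int(row["timeStamp"])
            first = ts if first is None else min(first, ts)
            last = ts if last is None else max(last, ts)
            total += 1
            if row["success"].lower() != "true":
                errors += 1
    duration_s = (last - first) / 1000 if total else 0
    return {
        "samples": total,
        "error_rate": errors / total if total else 0.0,
        "throughput_per_s": total / duration_s if duration_s else 0.0,
    }
```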

>  I've watched the Google presentation by Goranka Bjedov
>  (http://video.google.com/videoplay?docid=-6891978643577501895) which was
>  enlightening, but unfortunately there's little detail on the actual data
>  processing after it. It seems to me that they convert all their data to
>  SQL, insert it into MySQL and query it for useful results, but this
>  would mean that one has to develop a frontend for querying all that data
>  - which is beyond my project scope.
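It doesn't have to be a full frontend, though; even a throwaway script
with an embedded database gets you ad-hoc queries over the whole run.
Untested sketch using SQLite in place of MySQL (the table and column
names are mine, and the CSV columns assume JMeter's default header):

```python
import csv
import sqlite3

def load_and_query(csv_path):
    """Load a JMeter CSV log into SQLite and query per-label stats.

    SQLite here stands in for MySQL purely for illustration; the same
    GROUP BY query works on either.
    """
    con = sqlite3.connect(":memory:")
    con.execute(
        "CREATE TABLE samples (ts INTEGER, label TEXT,"
        " elapsed INTEGER, success TEXT)")
    with open(csv_path, newline="") as f:
        rows = [(int(r["timeStamp"]), r["label"], int(r["elapsed"]), r["success"])
                for r in csv.DictReader(f)]
    con.executemany("INSERT INTO samples VALUES (?, ?, ?, ?)", rows)
    # Per-request-label sample count, average and max response time
    return con.execute(
        "SELECT label, COUNT(*), AVG(elapsed), MAX(elapsed)"
        " FROM samples GROUP BY label ORDER BY label").fetchall()
```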
>
>  Thanks in advance for any help!
>
>  Regards,
>
>  infernix
>

---------------------------------------------------------------------
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
