Jeff Baird wrote:
I wrote a small program to insert the SQL parameters into the SQL, so the log file has one SQL statement per line with no parameterization.

I started with a trace collected by the database itself, so all parameters were set in the original file. From there I defined my own parameters for the update and insert statements, since the distribution of parameter values in the selects was already representative of 'the real world'.
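
Roughly, the inlining step looked like this (a minimal sketch, not my actual program; it assumes the trace gives you each statement's '?' placeholders together with the captured parameter values, in order):

// Minimal sketch of inlining captured parameters into traced SQL.
// A real version would also need to skip '?' characters that sit
// inside quoted string literals.
import java.util.List;

public class SqlInliner {

    public static String inline(String sql, List<Object> params) {
        StringBuilder out = new StringBuilder();
        int p = 0;
        for (int i = 0; i < sql.length(); i++) {
            char c = sql.charAt(i);
            if (c == '?' && p < params.size()) {
                out.append(literal(params.get(p++)));
            } else {
                out.append(c);
            }
        }
        return out.toString();
    }

    // Numbers go in bare; everything else gets quoted, with any
    // embedded single quotes doubled up.
    private static String literal(Object v) {
        if (v instanceof Number) return v.toString();
        return "'" + v.toString().replace("'", "''") + "'";
    }
}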


What are the details of how you set up your JDBC sampler to access the SQL coming from the CSV dataset?

There wasn't much to it. I do recall another problem I had, though: at some point I must have used an M$ tool to edit one of the CSV files, because the encoding changed on me, plunging all attempts to read from it into a sea of despair... (how the hell did I forget about that!)
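
If anyone else trips over that, a one-off re-encode sorts it out; something like this, assuming the editor left the file in windows-1252 (check what yours actually wrote):

// One-off re-encode of a CSV that an editor silently saved in
// windows-1252 (an assumption; verify the actual source encoding).
import java.io.*;
import java.nio.charset.Charset;

public class Reencode {
    public static void main(String[] args) throws IOException {
        BufferedReader in = new BufferedReader(new InputStreamReader(
                new FileInputStream("queries.csv"), Charset.forName("windows-1252")));
        PrintWriter out = new PrintWriter(new OutputStreamWriter(
                new FileOutputStream("queries-utf8.csv"), Charset.forName("UTF-8")));
        String line;
        while ((line = in.readLine()) != null) out.println(line);
        in.close();
        out.close();
    }
}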

The JDBC config looks straightforward. I use a variable name bound to the pool since I have 2 thread groups simulating 2 applications running concurrently against the same db, but everything else is obvious. Is your database URL correct? Driver name? User/pw?
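
For reference, each connection config looks something like this (the values below are made up; ours happen to point at SQL Server, but any JDBC driver is wired up the same way):

JDBC Connection Configuration (one per thread group / simulated application):
  Variable Name:       app1Pool    <- each JDBC Sampler in that thread group names this pool
  Database URL:        jdbc:sqlserver://dbhost:1433;databaseName=testdb
  JDBC Driver class:   com.microsoft.sqlserver.jdbc.SQLServerDriver
  Username / Password: whatever your test login is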

For my CSV config I specify 2 variables, 'prefix_query_type' and 'prefix_query_value', where prefix denotes the application I am simulating (the prefix is likely redundant, but I'm still using it). I use a semi-colon delimiter and my file has the format:

Select Statement; SELECT stuff FROM table WHERE ...
Update Statement; UPDATE table SET stuff1 = 'something', stuff2 = 'randomText'... WHERE ...
Update Statement; INSERT INTO table VALUES (counterValue, 'something', ...

(The INSERT deliberately uses the 'Update Statement' type; that is what JMeter's JDBC Sampler expects for anything other than a SELECT.)
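
Generating that file from the inlined trace is mechanical. A sketch, assuming one inlined statement per line and no semi-colons inside the SQL itself (file names are made up):

// Sketch: turn a file of inlined SQL (one statement per line) into
// the two-column, semi-colon delimited CSV above. Classification is
// the simple rule JMeter wants: SELECTs are 'Select Statement',
// everything else (INSERT, UPDATE, DELETE) is 'Update Statement'.
import java.io.*;

public class TraceToCsv {
    public static void main(String[] args) throws IOException {
        BufferedReader in = new BufferedReader(new FileReader("trace.sql"));
        PrintWriter out = new PrintWriter(new FileWriter("queries.csv"));
        String sql;
        while ((sql = in.readLine()) != null) {
            if (sql.trim().length() == 0) continue;
            String type = sql.trim().toUpperCase().startsWith("SELECT")
                    ? "Select Statement" : "Update Statement";
            out.println(type + ";" + sql);
        }
        in.close();
        out.close();
    }
}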

The JDBC Sampler needs to specify a variable name for the pool if you used one previously.


Did you get parameterized SQL to work when both the SQL statement and the parameters come from the CSV dataset?

Yes, albeit in quite a limited way. In the JDBC Sampler, I have a custom query type (the combo is editable, so you can type into it)

${prefix_query_type}

and a query of the form

${__javaScript(vars.get("prefix_query_value").replace('counterValue'\,'${__javaScript(100000000+${__counter(FALSE)})}').replace('randomText'\,'${__time()}'))}


Reading the query type from the CSV file maintains the 'sequential integrity' of the queries presented to the database: the JMeter queries arrive in the same order as in the trace originally collected from the running applications.

The JavaScript processing of the queries lets me keep the inserts in line, by forcing the synthetic id values to be larger than 100000000, which means I can delete them at the end of a run. It also lets me stuff some ever-changing random junk into a few fields of the update statements, so that the db can't just ignore repeated update queries which have no changes to push through.
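
For anyone who finds the nested __javaScript hard on the eyes, I believe the same rewrites can be done in a BeanShell PreProcessor on the sampler; a sketch, assuming a Counter config element that writes its value to a (hypothetical) variable called insertCounter:

// BeanShell PreProcessor doing the same rewrites as the __javaScript
// expression above. Assumes a Counter element publishing the
// (hypothetical) variable 'insertCounter'.
String q = vars.get("prefix_query_value");

// keep synthetic ids above 100000000 so test rows can be deleted later
long id = 100000000L + Long.parseLong(vars.get("insertCounter"));
q = q.replace("counterValue", String.valueOf(id));

// ever-changing junk so repeated updates can't be ignored by the db
q = q.replace("randomText", String.valueOf(System.currentTimeMillis()));

vars.put("prefix_query_value", q);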

I just wish I knew how to speed things up enough that the database was actually stressed.

As a request for enhancement: I use another tool, SQLQueryStress, which has cool support for parameterization. The source of parameter values is itself a SQL query result set. It would be cool to do something similar with JMeter.
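
In the meantime, a workable approximation is to dump the parameter query's result set to a CSV and feed that to a CSV Data Set Config. A sketch; the query, URL and credentials are placeholders:

// Dump a parameter query's result set to a CSV for the CSV Data Set
// Config: a poor man's version of SQLQueryStress parameterization.
import java.io.*;
import java.sql.*;

public class ParamsToCsv {
    public static void main(String[] args) throws Exception {
        // older drivers need explicit registration
        Class.forName("com.microsoft.sqlserver.jdbc.SQLServerDriver");
        Connection c = DriverManager.getConnection(
                "jdbc:sqlserver://dbhost:1433;databaseName=testdb", "user", "pw");
        Statement s = c.createStatement();
        ResultSet rs = s.executeQuery("SELECT id, name FROM customers");
        PrintWriter out = new PrintWriter(new FileWriter("params.csv"));
        while (rs.next()) {
            out.println(rs.getString(1) + ";" + rs.getString(2));
        }
        out.close();
        c.close();
    }
}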

--
Shane

