>>>>> "BM" == Bob McConnell <r...@cbord.com> writes:

  BM> From: Uri Guttman

  >> 
  >> you can even use perl itself. just write a perl data structure and
  >> call do on it. it can be an anon array or hash which is the
  >> returned value. no need to learn yaml or load another module. you
  >> can also slurp/eval the file (which is what do does anyhow). or do
  >> what i did and put the data in the test file, write a shared driver
  >> module which you load, and then write a simple wrapper to run the
  >> tests. also you can still do a plan. all you need to do is prescan
  >> the data to count all the tests (and variations) and send that to
  >> the test plan. i do that too.

  BM> But QA techs and managers won't be able to read or extend the data in
  BM> Perl. What attracted me to YAML specifically was that other less
  BM> technical folks should be able to modify and add test cases. Once I am
  BM> done with this, it will get checked into Perforce and may eventually
  BM> become part of a Hudson managed regression test suite. I will add more
  BM> test scripts, but they will need to maintain the individual cases. In
  BM> some situations, I expect they will want to completely rewrite the input
  BM> data set using data sets exported from other related systems.

if you need special perl code to handle checking results or generating
expected good results, then yaml won't do. you could put perl code in
strings in the yaml, wrap each string in a sub, and eval it. also,
knowing how to create a proper test entry isn't something even qa techs
and managers can always get right. i saw a module test that was checking
whether two arrays were the same. it used @ar1 == @ar2, which only
compares their lengths. sure, it passed the test, but it didn't test
what was needed.
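to make that pitfall concrete, here is a minimal sketch. in numeric
context a perl array yields its element count, so == compares lengths
only; the arrays and the join-based check are just illustration (for
nested structures you'd reach for Test::More's is_deeply instead):

```perl
use strict;
use warnings;

my @ar1 = ( 1, 2, 3 );
my @ar2 = ( 3, 2, 1 );

# the broken check: each array evaluates to its length here,
# so this passes even though the contents differ
print "lengths equal\n" if @ar1 == @ar2;

# an actual element-by-element comparison (fine for flat scalar
# arrays; use Test::More::is_deeply for nested structures)
if ( join( ',', @ar1 ) eq join( ',', @ar2 ) ) {
    print "contents equal\n";
}
else {
    print "contents differ\n";
}
```

run as-is, the length check passes while the content check reports a
difference, which is exactly the bug that module's test hid.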

  BM> Then the next step will be to figure out how to do fuzz testing on
  BM> those forms. The response to submitting a form will change with
  BM> each incorrect input, which will present a challenge in how to
  BM> detect which response is correct and how to react to each one.

again, this is something code can help with where yaml alone won't
do. you can code up special subs in the test entry and call them to
generate both inputs and expected outputs with whatever variations you
want. given that the tests run sequentially over the array of hashes,
you can also design the shared driver code to keep some data/objects
around between tests, so you have history to work with. you would put
in a special option to mark which data should be kept around.
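a minimal sketch of that driver idea. the entry keys (run, keep,
expect) and the session values are hypothetical names, not anything
from a real module; the point is only that the driver owns a context
hash that survives from one table entry to the next:

```perl
use strict;
use warnings;

my %context;    # shared state, lives in the driver, visible to all tests

my @tests = (
    {
        name => 'login step',
        run  => sub { return 'session-42' },          # stand-in for real work
        keep => sub { $context{session} = shift },    # stash result for later tests
    },
    {
        name   => 'later step uses kept data',
        run    => sub { return "used $context{session}" },
        expect => 'used session-42',
    },
);

for my $test ( @tests ) {
    my $result = $test->{run}->();
    $test->{keep}->( $result ) if $test->{keep};
    next unless exists $test->{expect};
    print( ( $result eq $test->{expect} ? 'ok' : 'not ok' ),
           " - $test->{name}\n" );
}
```

the same loop is where you would call ok() and where a prescan of
@tests would feed the plan count.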

  >> 
  >> you write pairs of inputs and expected outputs in your data. not
  >> tricky at all. in my structures i have input data, golden sort code
  >> (which is supposed to generate expected results) or expected
  >> results. you then compare the expected to actual results for the
  >> ok() call. this is very easy for almost any kind of test where you
  >> could call ok().

  BM> The problem is that I can't define a one to one pairing. There are
  BM> multiple fields in the forms that interact, so one value in a
  BM> field may cause different results depending on the values in one
  BM> or more of the other fields. For example, if I enter a valid VISA
  BM> card number but select Discover from a drop down, whether
  BM> everything else is correct or not, I should get a specific
  BM> error. When they match correctly and I enter a valid but incorrect
  BM> zip code for that card, I get a different error. Even with the
  BM> correct zip code, an incorrect street address will produce another
  BM> error. (No, I haven't figured out what happens if both the zip and
  BM> address are wrong. I assume one of them takes precedence, but the
  BM> vendor's document doesn't say.) I just can't see any logical way
  BM> to map all of these interactions.

you don't map one field to one result. you can map a set of fields to a
set of results with hashes. data is data. you can generate many
combinations (not all, as you say) with code. combinatorics to the
rescue. you can handle errors too. in one of my modules with
table-driven testing, i have a file just to check errors. it exercises
all the error reporting styles and also looks for each possible error
text. sure, it was easier to generate the mistakes in a simpler module,
but you can do the same in yours. set up a normal set of data, then let
each test override one or more fields. the normal set is shared among
all the tests, and each table entry (a hash, as i keep saying) would
have an override data field (which could itself be a hash). each entry
would also have an expected error result value.
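a sketch of that shared-baseline-plus-override layout, using your card
example. the field names, card data, error strings, and the toy
validate() sub are all invented for illustration; in your suite the
validator would be the real form submission:

```perl
use strict;
use warnings;

# the normal set of data, shared by every test entry
my %base = (
    card_type => 'VISA',
    card_num  => '4111111111111111',
    zip       => '14850',
);

# each entry overrides one or more fields and names the expected error
my @tests = (
    { name => 'good data',     override => {},                          error => '' },
    { name => 'type mismatch', override => { card_type => 'Discover' }, error => 'card type mismatch' },
    { name => 'bad zip',       override => { zip => '99999' },          error => 'zip does not match' },
);

for my $test ( @tests ) {
    # merge: override fields win over the shared baseline
    my %input = ( %base, %{ $test->{override} } );
    my $error = validate( \%input );
    print( ( $error eq $test->{error} ? 'ok' : 'not ok' ),
           " - $test->{name}\n" );
}

# toy validator standing in for the real form submission
sub validate {
    my $in = shift;
    return 'card type mismatch' if $in->{card_type} ne 'VISA';
    return 'zip does not match' if $in->{zip} ne '14850';
    return '';
}
```

the hash merge means a qa tech adding a case only writes the fields
that differ from normal, plus the error text they expect back.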

again, check out Sort::Maker for how i do this. the same approach can
work in your case and save you tons of work.

uri

-- 
Uri Guttman  ------  u...@stemsystems.com  --------  http://www.sysarch.com --
-----  Perl Code Review , Architecture, Development, Training, Support ------
---------  Gourmet Hot Cocoa Mix  ----  http://bestfriendscocoa.com ---------
