I'm porting a Pylons 1 application to Pyramid and I want to write some
automated tests. It has only a few screens but it has a large number of
input variables that are shared between screens, and it makes numerous
calls to a C library (using ctypes) that may return error messages, which
are propagated to the user in a delayed manner similar to flash messages.

Currently I have a pair of rudimentary Twill scripts to test the new site
and compare it to the old site. But the Twill shell has a lot of
limitations, so I'm thinking of switching to Twill's Python API or unittest.
So I'm wondering if anybody has suggestions on choosing between these, or
ideas for how to design the tests in Pyramid.

I'm leaning toward functional tests first, because the Pylons code was
written by someone else, and the client is mainly interested in whether the
new site behaves the same as the old one and returns the same results,
rather than in what each individual function does. I can fill in the
low-level tests later, but I think I need some more "practical" tests
first.

So, a basic use case looks like this:
(1) go to the home page, which loads a canned "weather scenario" into the
session (a bunch of numeric and string variables).
(2) go to the chemical selection page and choose a chemical. This loads a
bunch of chemical properties into the session.
(3) choose one of four dispersion models.
(4) each model has 1-3 screens to fill out. Some fields are shown/required
only if other fields are filled in.
(5) submitting each screen does standard form validation (I think it's all
Javascript rather than FormEncode?) and loads its fields into the session.
But it also calls C functions, and the data may be rejected due to some
obscure detail in the C model that can't be caught by ordinary Javascript
or Python validation. E.g., you say the chemical is a liquid with such a
volume, but that chemical can't be a liquid at the weather's temperature.
The C function spits back one or more "stop messages" (errors) and/or "show
messages" (warnings), which go into the session; the user is then
redirected back to the form page. The form page redisplays the form, and
the stop and show messages are extracted from the session into the template
and appear in a Javascript "lightbox" (a kind of modal dialog, with the
dimmed form fields showing around it).
(6) If there were no errors, it continues to a result page which calls some
more C functions and displays a bunch of results.

The client likes the current user interface and lightboxes and doesn't want
any changes there. And with a hundred variables to keep track of, I'm not
inclined to change how they flow in and out of the session and templates.

So my three choices are Twill scripts, Twill Python code, or unittests.
Twill scripts are becoming annoyingly limiting, especially for recognizing
server-side exceptions and stop messages. So I'm thinking about Twill's
Python API, which mimics a browser programmatically. I also like Pyramid's
innovations in unit testing, but I'm not sure whether unittest can
accommodate the kind of high-level use cases I need, with multiple POSTs in
a test, session persistence, and recognition of stop messages. (Perhaps I
could recognize stop messages by looking in the session?) I'd also like
tests I can run against the old version, and I'm not even considering
writing Pylons unittests for it, since that would be a different API and is
on its way out anyway.
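On the multiple-POSTs-with-session question: that part is quite doable from
plain unittest. The pattern is just a test client that carries cookies
between requests, which is essentially what WebTest's TestApp does when
wrapping a Pyramid (or any WSGI) app. A stdlib-only sketch against a toy
WSGI app (everything here is hypothetical stand-in code, not the real
site):

```python
# Toy WSGI app with cookie-keyed sessions, plus a minimal test
# client that preserves cookies across requests -- the mechanism
# that lets one test span several POSTs.
import io
from http.cookies import SimpleCookie
from urllib.parse import parse_qs, urlencode

SESSIONS = {}  # sid -> session dict

def toy_app(environ, start_response):
    cookie = SimpleCookie(environ.get("HTTP_COOKIE", ""))
    sid = cookie["sid"].value if "sid" in cookie else str(len(SESSIONS))
    session = SESSIONS.setdefault(sid, {})
    size = int(environ.get("CONTENT_LENGTH") or 0)
    form = parse_qs(environ["wsgi.input"].read(size).decode())
    if environ["PATH_INFO"] == "/chemical" and "chemical" in form:
        session["chemical"] = form["chemical"][0]
    start_response("200 OK", [("Set-Cookie", f"sid={sid}")])
    return [repr(session).encode()]  # echo session for inspection

class Client:
    """Minimal test client that carries cookies between requests."""
    def __init__(self, app):
        self.app = app
        self.cookies = ""

    def post(self, path, data):
        body = urlencode(data).encode()
        environ = {
            "REQUEST_METHOD": "POST",
            "PATH_INFO": path,
            "CONTENT_LENGTH": str(len(body)),
            "HTTP_COOKIE": self.cookies,
            "wsgi.input": io.BytesIO(body),
        }
        captured = {}

        def start_response(status, headers):
            captured["status"] = status
            for name, value in headers:
                if name == "Set-Cookie":
                    self.cookies = value.split(";")[0]

        result = b"".join(self.app(environ, start_response))
        return captured["status"], result
```

A test can then POST the chemical screen, POST the model screen, and assert
that state from the first request is still visible in the second. Against
the real apps you'd swap the toy pieces for WebTest (new Pyramid site) or
an HTTP client (old Pylons site), and the test body would read the same.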

So, anyone have any ideas?

-- 
Mike Orr <[email protected]>

-- 
You received this message because you are subscribed to the Google Groups 
"pylons-discuss" group.
To post to this group, send email to [email protected].
To unsubscribe from this group, send email to 
[email protected].
For more options, visit this group at 
http://groups.google.com/group/pylons-discuss?hl=en.
