Dear all,

I'm combining these notices as both are bugfix releases, released simultaneously because one depends on the other.
See release notes in the downloads for details.

Regards,
Geoff Bache

About (See http://www.texttest.org for more details):
=====
TextTest is a tool for automatic text-based functional testing. This means running a batch-mode executable in lots of different ways from the command line, and using the contents of the text files it produces as a means of verifying the behaviour of that application.
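
The comparison-based idea is simple enough to sketch in a few lines. The snippet below is not TextTest code, just an illustration of the approach; the program name "myapp", its "--batch" flag and the baseline file "baseline.txt" are all hypothetical:

    import re
    import subprocess

    # Run the application in batch mode and capture its text output
    result = subprocess.run(["myapp", "--batch"], capture_output=True, text=True)

    # Mask volatile details (here, HH:MM:SS timestamps) so they cannot
    # cause false failures
    timestamp = re.compile(r"\d{2}:\d{2}:\d{2}")

    def filtered(text):
        return [timestamp.sub("<time>", line) for line in text.splitlines()]

    # Any remaining difference from the stored baseline is a test failure
    with open("baseline.txt") as f:
        expected = filtered(f.read())

    print("SUCCESS" if filtered(result.stdout) == expected else "FAILURE")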

It is written in Python using PyGTK for its user interfaces, and is supported on POSIX-based systems and Windows (2000, XP, Vista).
Features include:
- Filters output to avoid false failures
- Manages test data and isolation from global effects
- Automatic organisation and grouping of test failures
- "Nightjob website" to get a view of test progress over time
- Performance testing
- Integrates with Sun Grid Engine (and LSF) for parallel testing
- Various "data mining" tools for automatic log interpretation (includes integration with bug trackers)
- Interception techniques to automatically "mock out" third-party components (command line and network traffic)
- Integrates with xUseCase tools for GUI testing (e.g. PyUseCase below)

About PyUseCase (See also http://www.texttest.org/index.php?page=concepts&n=xusecase):
=============

PyUseCase is a record/replay layer for Python GUIs. It consists of two modules: usecase.py, which is a generic framework for all Python GUIs (or even non-GUI programs), and gtkusecase.py, which is specific to PyGTK GUIs. See www.pygtk.org for more info on PyGTK.

The aim is only to simulate the interactive actions of a user, not to verify correctness of a program. Essentially it allows an interactive program to be run in batch mode. Another tool is needed for verification of behaviour, for example TextTest, also available from SourceForge.

The idea of a "use-case" recorder is described in some detail in a paper at http://www.carmensystems.com/research_development/articles/crtr0402.pdf

To summarise, the motivation is that traditional record/replay tools, besides being expensive, tend to record very low-level scripts that are a nightmare to maintain and can only be read by developers. This is in large part because they record the GUI mechanics rather than the intent behind the test (even though nowadays they usually record in terms of widgets rather than pixels).

Use-case recorders like PyUseCase are built around the idea of recording in a domain language: the developer sets up a mapping between the actions that can be performed with the UI and names that describe the point of those actions. This incurs an extra setup cost, of course, but it has the dual benefit of making the tests much more readable and much more resilient to future UI changes than tests recorded as a programming-language-like script.
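
Concretely, the mapping is set up in the application code at the point where signals would normally be connected. The sketch below shows the flavour from the PyGTK side; the exact ScriptEngine API is rendered from memory and may differ in detail, and the window, button and handler are made up for illustration:

    import gtk
    from gtkusecase import ScriptEngine

    def bookSeats(*args):
        print "booking seats..."

    scriptEngine = ScriptEngine()
    window = gtk.Window()
    button = gtk.Button("Book")
    window.add(button)

    # Instead of connecting the signal directly:
    #     button.connect("clicked", bookSeats)
    # route it through the script engine, naming the action in domain terms.
    # Recording a click then writes the line "proceed to book seats";
    # on replay, that line triggers bookSeats() again.
    scriptEngine.connect("proceed to book seats", "clicked", button, bookSeats)

    window.show_all()
    gtk.main()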

Another key advantage is that, because we instrument the code anyway to create the above mapping, it is easy to tell PyUseCase where the script will need to wait, thus allowing it to record "wait" statements without the test writer having to worry about it. This is otherwise a common headache for recorded tests: most other tools require you to explicitly synchronise the test when writing it (external to the recording).
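
In code, this amounts to the application notifying the recorder when it reaches a quiescent point. Roughly as below (the function name is from memory of the usecase.py API and should be treated as an assumption; the handler is hypothetical):

    from usecase import applicationEvent

    def onFlightDataLoaded():
        # Tell PyUseCase that something worth waiting for has completed.
        # While recording, this produces the line
        #     wait for flight information to load
        # and on replay the script will not proceed past that line until
        # this event has fired again.
        applicationEvent("flight information to load")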

Example recorded usecase ("test script") for a flight booking system:

wait for flight information to load
select flight SA004
proceed to book seats
# SA004 is full...
accept error message
quit
