On 07/01/13 21:48, Scott Kostyshak wrote:
> I'm curious about why there are
> so few tests for LyX.
> 
> In particular, I'm interested in the following two questions:

Let's try the funny ride... :-)

> 1) Why don't you write tests?

I can tell you why I wrote a few: after a tiny change in the logic for Advanced
Find and Replace, I needed a way to ensure that all the basic usages of the
feature kept working as expected, with various options enabled/disabled, in
various cases, and it's impossible to run such tests manually (I was doing
that until I discovered the amazing monkey testing framework by McCabe-Dansted
for automating key presses sent to LyX -- only that I didn't need random
presses, but specific ones creating the test scenarios I needed).

> 2) Why do you think others don't write tests?

A real testing framework was never really designed or advertised, AFAIK. Only
when needed for critical regressions did someone write a few [semi-]automated
tests.

> a. I do. They're called assertions.

Hmmm... there's quite a difference, but assertions help, of course (see
below).

> b. The time it would take me to write a test would be better spent on
> some other aspect of LyX development.

Actually, with so little time to dedicate to the project, and just for fun,
why spend it doing the non-fun stuff :-)?
But it's also no fun when the d..n thing crashes while you're writing the
d..n paper!

> c. No one runs the tests.

Ehm... perhaps whoever takes care of the release might take on such
responsibility?
(Assuming all tests would work on the platform/OS used for the purpose, but I
suspect that's not really the case.)

> f. The initial setup (dependencies, and cmake vs. autotools) of
> getting the tests to work is too annoying.

I guess it's no more annoying than setting up the whole development
environment for LyX, with all the needed libraries, headers and tools.

> Some tests seem difficult to write. For example, I find Tommaso's
> advanced find tests to be very creative but I imagine they were
> time-consuming to write. Thus, my next question is:
> 
> 3) What are the types of tests that are the most useful in the context
> of LyX and is there anything I can do to make writing those tests
> easier?

Very subjective: useful for what? Are there any statistics about the
most annoying bugs/problems that would allow prioritizing them? I guess
we have only the criticality field in the trac. One possible action
might be: whenever a fixed bug was leading to a SEGFAULT, write a test
ensuring it doesn't come back in the future.

> My own attempt at an answer:
> The tests that are the most useful are those that are the easiest to
> write (because then we will actually write them), which in LyX are
> tests that
> (a) can be expressed by a command sequence, and
> (b) trigger an assertion or crash (so there's no need to redirect with
> a debug flag to a log and use pcregrep; and because a crash is often a
> bad type of bug from the user perspective).

But most of the tests I currently have in the findadv scenarios don't
trigger assertions; they would simply fail to find the right stuff if
the find logic has some problem (or they would find what should not be
found, etc.).
Nor would it make sense to add "triggerable" assertions to LyX: I mean,
if I trigger an LFUN that's supposed to find something and it doesn't
find it, LyX is supposed to give a notice to the user and that's it,
not to assert!
Hence the "white box" type of test, which exploits that ugly debug log.
Though most tests only need to know the exact position of the found
match and compare it with a known position; an equivalent way to get
such information out of LyX would suffice.

Also, debug messages might easily be considered completely optional
and suppressible once there's confidence that the functionality works,
but if one removes some key messages, or even alters them only slightly,
then all findadv tests will fail! I don't like such fragility, of course.
A testing framework might include a special TestLog() service (or a
dedicated debugging level) to tag those messages that must not be
changed, and whose output goes to a special testing log (rather than
the debug log). Just a proposal.
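
A minimal sketch of what I mean (the names here are made up, not actual
LyX API; such a thing could perhaps sit next to the existing debug
machinery in support/debug.h):

    #include <fstream>
    #include <ostream>

    // Stream dedicated to test-visible messages, kept separate from
    // the ordinary debug log. The file name is only illustrative.
    inline std::ostream & testLog()
    {
        static std::ofstream os("lyx-tests.log");
        return os;
    }

    // Messages tagged this way are part of the testing "contract":
    // unlike normal debug output, they must not be reworded casually.
    #define TESTLOG(msg) testLog() << msg << std::endl

    // E.g., in the findadv code (the printed values are illustrative):
    // TESTLOG("findadv: match at par " << pit << " pos " << pos);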

Of course, one could write independent test programs that link only the
relevant findadv source files, and invoke those methods in a very
specific way, so as to get rid altogether of the need for launching the
GUI to test the find logic, speeding up the whole test execution, etc.
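
A rough sketch of such a program (findMatch() and its Match type are
hypothetical stand-ins for whatever entry points lyxfind.cpp would
actually expose after a bit of refactoring):

    #include <cassert>
    #include <string>

    // Hypothetical result type: paragraph and position of a hit.
    struct Match { int par; int pos; };

    // Stub standing in for the real find entry point; the actual test
    // would link the findadv code and call into it directly.
    Match findMatch(std::string const & text, std::string const & needle)
    {
        std::string::size_type p = text.find(needle);
        return { 0, p == std::string::npos ? -1 : int(p) };
    }

    int main()
    {
        // Compare the reported match position against a known-good
        // one, exactly as the log-scraping tests do, but with no GUI.
        Match m = findMatch("foo bar baz", "bar");
        assert(m.par == 0 && m.pos == 4);
        return 0;
    }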

> Currently, writing this type of test is pretty easy, but perhaps it
> could be made even easier by just asking the developer to add one line
> to a file, say LFUNcrashes.txt:
> 
> command-sequence lfun1; lfun2; lfun3 # see [trac bug number]

See my comment above about non-crashing bugs.

Thanks for sharing/triggering this.

        T.
