On Mon, Oct 26, 2015 at 03:33:22PM +0000, Guenter Milde wrote:
> On 2015-10-26, Liviu Andronic wrote:
> > On Mon, Oct 26, 2015 at 10:24 AM, Guenter Milde <mi...@users.sf.net> wrote:
> >> On 2015-10-26, Scott Kostyshak wrote:
> 
> ...
> 
> >> Could this prevent some of the regressions? (We need to check carefully,
> >> not only whether the relevant documents compile without error, but also
> >> whether the exported document is OK.)
> 
> > By "exported document" do you mean .tex or .pdf? If it is .tex, would it
> > be a good idea to check whether the latest export is identical to a
> > reference .tex generated when creating the test and, if not, to display
> > a diff?

Kornel and I have discussed similar ideas. We would like to have tests
that do what you suggest for lyx2lyx. For example, we first export to
.tex, then convert the .lyx file to an older format version, convert it
back, and export to .tex again to see whether the files differ. This is
nice because it is quicker than compilation, and, as you say, it might
catch bugs that relying on the exit code alone would not.
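Just to make the idea concrete, something along these lines could work.
This is a rough, untested sketch; the exact lyx -E/--export-to and
lyx2lyx -t/-o options, the default "convert to newest format" behaviour,
and the temporary file names are assumptions that would need adapting to
the real test harness:

#!/usr/bin/env python3
# Sketch of a lyx2lyx round-trip comparison (untested; command-line
# options below are assumptions and may differ between LyX versions).
import difflib
import subprocess
import sys

def export_tex(lyx_file, tex_file):
    # Export the .lyx document to LaTeX on the command line
    # (assuming lyx -E/--export-to is available).
    subprocess.check_call(["lyx", "-E", "latex", tex_file, lyx_file])

def round_trip(lyx_file, old_version):
    # Convert to an older file format and back again with lyx2lyx.
    # With no -t, lyx2lyx should default to the newest format.
    subprocess.check_call(["lyx2lyx", "-t", old_version,
                           "-o", "old.lyx", lyx_file])
    subprocess.check_call(["lyx2lyx", "-o", "roundtrip.lyx", "old.lyx"])
    return "roundtrip.lyx"

def main():
    lyx_file, old_version = sys.argv[1], sys.argv[2]
    export_tex(lyx_file, "reference.tex")
    export_tex(round_trip(lyx_file, old_version), "roundtrip.tex")
    with open("reference.tex") as a, open("roundtrip.tex") as b:
        diff = list(difflib.unified_diff(a.readlines(), b.readlines(),
                                         "reference.tex", "roundtrip.tex"))
    if diff:
        # Any difference means the round trip changed the exported LaTeX.
        sys.stdout.writelines(diff)
        sys.exit(1)

if __name__ == "__main__":
    main()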

> > Simply relying on the exit code seems like an easy way to miss
> > regressions that do not generate errors...

Agreed. 

> I think we have to distinguish several test methods:
> 
> a) automatic export tests with lots of "real life" documents (manuals,
>    templates, examples)
> 
> b) functional tests: import/export of a complete test document and comparing
>    it to the expected output
>    
> c) unit tests: test of certain features or sub-units.
> 
> Unit tests (c) are currently not implemented, AFAIK.

There has been a little talk of implementing them, but in the end
nothing was done. From what I understand, it would be a huge change,
and it was not clear whether it would be worth the effort or whether
everyone would use the tests.

> The tex2lyx tests are "functional tests" (b), where we keep track of the
> expected output. They also include export tests (in the "round trip"
> suite). Here, we have to manually check and update the "expected" output
> documents, distinguishing between intended changes and regressions or
> bug indicators.
> 
> For a), it would be a pain to keep track of and update all the output
> documents, because this would not only be required for different export
> routines but also for changes in the input docs. However, if the exit
> status of a test changes (from fail to pass or vice versa), we should
> check whether this is due to a new bug, a fix or just exposing previously
> hidden problems.

Agreed. In fact, sometimes it is an improvement when a test fails. For
example, when I checked manually, a test that was passing before was
producing garbled text in the PDF output; now it might fail with a
clear message such as "language xyz not supported". It is always good
to check manually why something fails and, if it passes, whether the
PDF output is good.

Scott
