On 31/03/11 05:40, Eric Schulte wrote:
> Suvayu Ali <fatkasuvayu+li...@gmail.com> writes:
> 
>> On Wed, 30 Mar 2011 15:42:19 -0600
>> "Eric Schulte" <schulte.e...@gmail.com> wrote:
>>
>>>>
>>>> This suite should actually be updated with effectively each patch
>>>> which introduces new features and run after each patch.
>>>>  
>>>
>>> Agreed, in a perfect world...
>>>
>>>>
>>>> So is it only necessary to add meat to this framework?
>>>>  
>>>
>>> Yes, I believe the best way forward would be to add tests to the
>>> existing framework.
>>
>> I have a possibly completely useless idea regarding "automatically"
>> checking regressions. As far as I understand, the problem now is that
>> it is not very feasible to do automated tests with whatever test suite
>> we have (or will have after the improvements) and check the exported
>> results for each patch, as at some step it involves human intervention
>> (as in, was the export good).

Good or not good is subjective - but *consistent* is not - and
consistency is important. Even if I do not like the default LaTeX output
from Org, I can tweak it, but there is a problem if there are unexpected
changes in the export which break my customizations or make it difficult
to recreate old documents, especially if these changes are not
documented.

>>
> 
> I would disagree that we need user interaction in the test suite.  There
> are already fully automated tests which, e.g., export to some backend
> like html or tex and then programmatically check for properties of the
> exported results.

Exactly - the tests in R work this way: you have some code which is
executed, and then the resulting *output* is redirected into a file.
These results are then compared to a reference output, and if they are
not *identical*, an error is raised.

Something similar could be done in Org: exporting to LaTeX should always
result in the same output, unless a change is intended (e.g. additional
headers, improvements, ...). So one could compare the resulting .tex file
with a reference .tex file for this test automatically, without user
intervention.
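To make this concrete, here is a minimal sketch of what such a
reference comparison could look like as an ERT test. The file names and
the helper function are made up, and `org-export-string-as' is only one
possible export entry point (older Org versions use different export
functions), so take it as an illustration of the shape, not as the
actual implementation:

(require 'ert)
(require 'ox)

(defun my/file-string (file)
  "Return the contents of FILE as a string."
  (with-temp-buffer
    (insert-file-contents file)
    (buffer-string)))

(ert-deftest my/latex-export-matches-reference ()
  "Exported LaTeX must be byte-identical to the stored reference file."
  ;; Hypothetical paths: an example Org file and its known-good export.
  (let ((exported  (org-export-string-as
                    (my/file-string "tests/examples/simple.org")
                    'latex t))  ; t = body only, to ignore preamble changes
        (reference (my/file-string "tests/examples/simple-reference.tex")))
    (should (string= exported reference))))

When an intended change alters the output, the reference .tex file is
simply regenerated and committed together with the patch.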

> 
> It is certainly likely that I am missing something, but I can't think of
> a situation or a feature of Org-mode which could not be tested under the
> current setup (mainly due to the fact that *every* user action in Emacs
> reduces to a series of function calls which could be programmatically
> recreated).
> 
>>
>> So maybe we can have a directory on the Worg website (not part of the
>> Worg git repo) where every week or so the test suite results are
>> published for whatever the org-mode head is at the time, for all the
>> supported formats. Then we "puny lisp illiterate" users can check up on
>> it over the course of the week and report back to the list if there is
>> a problem.
>>
>> Since this way people can look at the export formats they are
>> interested in, none of the formats gets treated like a stepchild
>> either. Would that be feasible? Or did I completely misunderstand the
>> problem at hand?
> 
> I'd think that a better way for contributing to the test suite in a
> non-lisp manner would be to submit test cases, e.g. "this block of
> Org-mode text should export to this but sometimes instead exports to
> this", or "when I press this key sequence in this place in this org-mode
> text I expect x to happen to the text".

Correct - this is what we need, in addition to programmatic tests of
individual functions. I would actually say that exports / tangling /
agendas / ... are possibly the more important test cases, as 1) problems
there only show up at a later stage of one's project, 2) errors in
individual functions are easily detected by users, reported, and fixed
quite quickly, and finally 3) an export / ... exercises quite a lot of
functions, which are therefore tested as well (kind of...).
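As an illustration of how such contributed cases could be consumed,
here is a rough sketch, again assuming ERT; the variable name, the
placeholder expected strings, and the export call are all hypothetical.
Contributors would only supply pairs of Org text and the output they
expect, and a single test walks over all of them:

(require 'ert)
(require 'ox)

(defvar my/submitted-export-cases
  '(("* A headline\nBody text."    . "expected LaTeX output goes here")
    ("- first item\n- second item" . "expected LaTeX output goes here"))
  "Alist of (ORG-TEXT . EXPECTED-LATEX) pairs collected from users.
The expected strings are placeholders, to be filled in from a
known-good export when the case is accepted.")

(ert-deftest my/submitted-cases-export-as-expected ()
  "Every submitted Org snippet must export to exactly the expected text."
  (dolist (case my/submitted-export-cases)
    (should (string= (org-export-string-as (car case) 'latex t)
                     (cdr case)))))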

> 
> We could even potentially leverage the existing Emacs macro system to
> build a *record* method so that users could semi-automatically record
> their actions, allowing an interactive method of recording tests (or
> submitting a re-creatable bug report).  Or at least recording enough
> information so that someone with a little bit more elisp-fu could wrap
> the recorded actions into a unit test.

That would be brilliant. Combined with error reporting - attach the
current buffer, record what was done and *the individual configuration
of Org / Emacs*, and finally email / upload it to an address where it is
automatically added to the other submitted test cases - this might bring
us a long way closer to a very useful test base. I am actually not aware
of any other test framework which lets "normal" users submit test cases
via email / internet - I think that would be a very useful addition.
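
For the record-and-replay part, here is a very rough sketch built on
plain keyboard macros, again assuming ERT. The buffer contents and the
key sequence are made-up placeholders, and whether this runs cleanly in
batch mode will depend on the individual configuration:

(require 'ert)
(require 'org)

(ert-deftest my/recorded-keys-produce-expected-text ()
  "Replay a recorded key sequence and check the resulting buffer text."
  (with-temp-buffer
    (org-mode)
    (insert "* TODO Write tests")
    (goto-char (point-min))
    ;; A user-recorded macro could boil down to something like this:
    ;; with default settings, C-c C-t cycles the headline to DONE.
    (execute-kbd-macro (kbd "C-c C-t"))
    (should (string-match-p "\\* DONE Write tests" (buffer-string)))))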


> 
> Hope this is helpful -- Eric

Most definitely,

Rainer

> 


-- 
Rainer M. Krug, PhD (Conservation Ecology, SUN), MSc (Conservation
Biology, UCT), Dipl. Phys. (Germany)

Centre of Excellence for Invasion Biology
Natural Sciences Building
Office Suite 2039
Stellenbosch University
Main Campus, Merriman Avenue
Stellenbosch
South Africa

Tel:        +33 - (0)9 53 10 27 44
Cell:       +27 - (0)8 39 47 90 42
Fax (SA):   +27 - (0)8 65 16 27 82
Fax (D) :   +49 - (0)3 21 21 25 22 44
Fax (FR):   +33 - (0)9 58 10 27 44
email:      rai...@krugs.de

Skype:      RMkrug
