I wrote:
> I think that it'd also be nice to get some consensus on which format of
> test we should maintain: the table version, or the raw-code version.

"Joseph F. Ryan" wrote:
> I think the consensus when Chromatic brought the subject
> up was to use the testing system that Parrot uses; however,
> your table version is kinda nice.

Well, with a very minor tweak, it does use the same testing system
as Parrot: it's just a bit more structured.

The minor tweak is to generate the CODE and OUTPUT chunks
into strings (instead of writing them to stdout), and then call C<output_is> directly.
That style can be harder to debug, though.
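To make the idea concrete, here is a minimal sketch of the table-driven approach. It is written in Python as a stand-in for the actual Perl/Parrot harness: the table rows, the `generate_case` expansion, and the `run_case` checker (playing the role of C<output_is>) are all illustrative names, not Parrot's real API.

```python
import io
import contextlib

# Each table row pairs a literal with its expected printed form.
TABLE = [
    # (literal source, expected output line)
    ("0",     "0"),
    ("0x1F",  "31"),
    ("1_000", "1000"),
]

def generate_case(literal, expected):
    """Expand one table row into a (code, output) pair of strings,
    much as the CODE and OUTPUT chunks would be generated."""
    code = f"print({literal})"   # stand-in for the generated test body
    output = expected + "\n"
    return code, output

def run_case(code, output):
    """Stand-in for output_is: run the code, compare captured stdout."""
    buf = io.StringIO()
    with contextlib.redirect_stdout(buf):
        exec(code)
    return buf.getvalue() == output

for lit, exp in TABLE:
    assert run_case(*generate_case(lit, exp)), f"failed for {lit}"
```

Because the generated CODE and OUTPUT land in strings rather than going straight to stdout, each pair can be handed to the comparison routine directly, at the cost of slightly harder debugging when a case fails.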

"Tanton Gibbs" <[EMAIL PROTECTED]> wrote:
> I also agree that the table version is nice.  However, I wonder how
> easily it will extend.  Right now all we're doing is printing literals
> and making sure they appear correctly.  It may not be so easy when we
> have to concat 5 strings and interpolate entries.  Therefore, I would
> recommend sticking with the testing system Parrot uses.

I was wondering when someone would bring that up (someone always
does). Extensibility doesn't matter: the code generator's specific purpose
is to generate tests of numeric literals. If that isn't what you want, use
a different generator; or just stick with hand-coding.

If people are happy to use these data-oriented test-scripts, then I'm
happy to examine various groups of tests and find their abstractions.
It's just basic data-modeling, applied to source code. By modeling
each file independently, I avoid the problems associated with
infinitely flexible abstractions. I usually find that after a few rounds
of refactoring, some of the abstractions become reusable.

To address your specific "concat * 5 + interpolate" issue: presumably
such things are not done in isolation. There will be a family of tests,
which do N concatenations with M interpolation styles. A (different)
code generator can easily generate such a family, exhaustively.
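As a sketch of how such a family could be generated exhaustively, the snippet below enumerates every combination of concatenation count and interpolation style. It is again Python standing in for the real generator; the piece list, the style names, and the Perl-flavoured test-body template are all hypothetical.

```python
from itertools import product

PIECES = ['"a"', '"b"', '"c"']   # string literals to concatenate
STYLES = ["plain", "nested"]     # hypothetical interpolation styles

def make_test(n_concat, style):
    """Emit one test body (as text) for N concatenations in one style."""
    expr = " . ".join(PIECES[:n_concat])   # Perl-style concat, built as text
    return f"# style={style}\nprint {expr};"

# Exhaustively cross N concatenations with M interpolation styles.
family = [make_test(n, s)
          for n, s in product(range(2, len(PIECES) + 1), STYLES)]
```

With 2 concatenation counts and 2 styles this yields 4 generated tests; adding a new style or a longer piece list grows the family automatically, which is the point of generating rather than hand-writing each case.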

For each family of tests, I think the correct approach is to start by
writing the test-code manually. But as soon as abstractions become
apparent, a session of refactoring can make the tests much more
readable and maintainable.


Dave.

