On Fri, Jan 23, 2009 at 11:16:21AM -0800, Dave Whipp wrote:
> I can see that. So the alternative is to give things names and/or tags,  
> so that we can attach parameters remotely.

Hmm, well, we also decided not to use any solutions that encourage
putting the metadata too far away from the place it modifies.
Somewhere else in the same file is perhaps okay (and I can see the
use of tags in messages if the message itself isn't unique, but
then why isn't the message unique?).  But as soon as you have unique
IDs people think they have to move the metadata out to a database,
and then you're back with the same kind of always out-of-date and
out-of-sync errors that we used to get with documentation before POD.
Plus you start getting back into uncertainty as to whether something
external to the program is cheating, unless you can prove that the
fudging metadata is positively cut off while doing validation testing.
I really like the notion that final validation of 6.0.0 involves
simply running the test files without any reference to outside data.
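
For a concrete sketch of what keeping the metadata right next to the
thing it modifies could look like (purely illustrative; the #?rakudo
directive and the subs here are hypothetical, not settled syntax):

    use v6;
    use Test;

    plan 2;

    sub double($x) { $x * 2 }

    # The test reads exactly as it should once everything works; the
    # only "broken on rakudo" marker is a fudge-style comment sitting
    # right beside the test it modifies, never in an external database.
    #?rakudo skip 'hypothetical: feature not implemented yet'
    is double(21), 42, 'double doubles its argument';

    ok double(0) == 0, 'double of zero is zero';

Run directly, that marker is just a comment, so final validation can
still mean nothing more than running the test files as they stand.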

> Such a mechanism should  
> probably be more general than just tests, so I'll overload "is also" to  
> impose additional traits:
>
>   module MyTests {
>    sub group1 {
>      ok foo :name<test_foo>; ## Q - would a label be better?
>    }
>   }
>
>   MyTests.group1.test_foo is also broken<rakudo>;
>
> presumably this would have some form of wildcarding, or inheritance of  
> the "broken" trait from outer scopes:
>
>   MyTests is also broken<rakudo>;
>
> Not sure if that could work.

I guess I don't see offhand what you're trying to do with that.
Modules are primarily about exportation, and seem like the wrong peg
to be hanging test info on--assuming such metadata even wants to look
like real code, which I don't think it does.  The real code wants to
look exactly like what it will look like when rakudo *isn't* broken
anymore.  Test code should rarely be in the business of asserting
that something is broken.  Or to put it another way, test code that
asserts failure a priori can never prove success.  We must keep a clean
separation between code that proves success and any indicator that says
"don't try this yet".  Every bit of code that is dependent on platform
dependencies is, by definition, not platform independent, and we've got
to keep at least the language validation tests platform independent.
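
To illustrate that separation (again just a hypothetical sketch, with
made-up sub names): the test asserts only what success looks like, and
any "don't try this yet" indicator stays outside the assertion:

    use v6;
    use Test;

    plan 1;

    sub flip-it(Str $s) { $s.flip }

    # The assertion states the success condition, nothing else.
    is flip-it('6.0.0'), '0.0.6', 'flip-it reverses its input';

    # What we must avoid: an assertion of failure a priori, such as
    #   nok flip-it('6.0.0') eq '0.0.6', 'known broken on rakudo';
    # which can never prove success; brokenness belongs in a separate
    # "skip this for now" marker, not in the test itself.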

Larry
