On Fri, Jan 23, 2009 at 4:08 PM, jerry gay <jerry....@gmail.com> wrote:
> On Fri, Jan 23, 2009 at 12:37, Dave Whipp <d...@dave.whipp.name> wrote:
>> I could also imagine writing code that reads from an Sqlite database, and
>> imposes that info onto the test. Whatever mechanism is used, I think we need
>> a language-defined mechanism to supply a stable unique identifier for each
>> test, so that it can be individually tracked and manipulated. Perhaps "is
>> only" is the wrong way to implement the action-at-a-distance, but it does
>> seem better (IMO) than a preprocessor.
>>
> i don't understand the drive to have unique test identifiers. we don't
> have unique identifiers for every code statement, or every bit of
> documentation. why are tests so important/special/different that each
> warrants a unique id? that aside, this functionality sounds like it
> can be encapsulated in a module, if desired. as it stands, i can't see
> a reason it *has to* be made available in the core.

Unique test identifiers are helpful because you can then track the
progress of a specific test across platforms or revisions.
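
For instance (a rough Python sketch, assuming each run is reduced to a
map from test id to pass/fail; the names are purely illustrative):

    # hypothetical: results of the same suite on two platforms/revisions
    run_a = {"dict-2.1": "pass", "dict-2.2": "fail"}
    run_b = {"dict-2.1": "pass", "dict-2.2": "pass"}

    # stable ids make a progress/regression report a simple diff
    for test_id in sorted(run_a.keys() & run_b.keys()):
        if run_a[test_id] != run_b[test_id]:
            print(f"{test_id}: {run_a[test_id]} -> {run_b[test_id]}")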

> as a recap, the discussions larry, patrick, moritz and i (and others,
> i'm sure) had on this topic long ago led to agreement that the most
> important characteristics for a portable specification test suite
> were:
>
> ~ the tests should be organized in such a way that it is easy to
> figure out what bit of the spec is under scrutiny
>  (addressed by directory/filename standardization and smartlinks)
> ~ the test files mustn't be cluttered with code that implementations
> need to ignore
>  (comments are used, which are by default ignored, and can be
> preprocessed to customize the test for each implementation)
> ~ the skip/todo markers should be as close to the relevant tests as
> possible, so they're less likely to fall out-of-sync
>  (the markers are in comments in the test file, directly above the tests)
>
> it's my view that spec tests should be easy to maintain for developers
> of multiple implementations, and uniqueness is an overly burdensome
> constraint.

A simple algorithm (used by tcl's spec tests) is to have each named
test correspond roughly to the name of the file (which in turn
corresponds roughly to the name of the feature being tested), and then
number each test, incrementing roughly sequentially within the file,
e.g.:

dict-1.1
dict-2.1
dict-2.2
dict-2.3

Then, if they have to add a test in a future revision, they can insert
it between dict-2.1 and dict-2.2, call it dict-2.1-a, and still know
that dict-2.2 is testing the same code, regardless of when that test
was run.
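
To make that concrete (a rough Python sketch, not anything the suite
mandates; the revision lists here are made up for illustration):

    # hypothetical: ids from a dict.t-style file across two revisions
    rev1 = ["dict-1.1", "dict-2.1", "dict-2.2", "dict-2.3"]
    rev2 = ["dict-1.1", "dict-2.1", "dict-2.1-a", "dict-2.2", "dict-2.3"]

    # ids present in both revisions name the same tests, so per-test
    # history survives the insertion of dict-2.1-a
    print(sorted(set(rev1) & set(rev2)))
    # ['dict-1.1', 'dict-2.1', 'dict-2.2', 'dict-2.3']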

Regards.

-- 
Will "Coke" Coleda
