Mikhail Loenko wrote:
Do you mean that for a single test that verifies 10 lines of code
working on a very specific configuration I have to create a parallel test tree?

The model I had in my head was based on two things - making it easy for the casual browser to figure out what is what, and making it simple to maintain our infrastructure.

I imagine that we have a big pool of unit tests that work on the boring, everyday, non-exotic situations. For ease of comprehension and use, that's in

   test/java/....

so then our test framework can be configured to just run every test it finds in there. With ant driving junit, it's easy - just write a test and drop it into that tree. It will get run.
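Just to sketch it (the target name, classpath reference and *Test naming
pattern are only assumptions for the example, not something we've decided),
the "run everything it finds" part could look roughly like:

   <target name="test" depends="compile.tests">
     <junit printsummary="on" fork="yes">
       <classpath refid="test.classpath"/>
       <formatter type="plain"/>
       <batchtest>
         <fileset dir="test/java">
           <!-- any test dropped into the tree is picked up automatically -->
           <include name="**/*Test.java"/>
         </fileset>
       </batchtest>
     </junit>
   </target>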

Then, if we have odd tests, like "test that can only be run on tuesday on sparc when the moon is full and I had a cheese sandwich the day before", we can have an 'odd test' tree

   oddtest/java/....

and then because you have all those conditions to set up (tuesday, sandwich, moon...) you'll have to have a more exotic config for the test harness anyway, so you will have to call out that test by name in the config anyway. Thus when you are browsing through the oddtest/java tree, you'll know that in order to understand the context of any test you find in there, you need to look back at the test harness to know what's going on.
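A rough sketch of what I mean by calling the odd tests out by name (the
target and the FullMoonSparcTest class are made up for the example; the
exotic setup itself still has to happen before this target is run):

   <target name="oddtest" depends="compile.oddtests">
     <junit printsummary="on" fork="yes">
       <classpath refid="oddtest.classpath"/>
       <formatter type="plain"/>
       <!-- each odd test is named explicitly, so the harness config is
            the place to look to understand which exotic setup it needs -->
       <test name="org.example.oddtest.FullMoonSparcTest"/>
     </junit>
   </target>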

See what I mean?


What about tests that work in two different exotic configurations? Should
we duplicate them?

No.

geir


Thanks,
Mikhail

On 1/26/06, Geir Magnusson Jr <[EMAIL PROTECTED]> wrote:
One solution is to simply group the "exotic" tests separately from the
main tests, so they can be run optionally when you are in that exotic
configuration.

You can do this in several ways, including a naming convention, or
another parallel code tree of the tests...

I like the latter, as it makes it easier to "see"

geir


Mikhail Loenko wrote:
Well, let's start a new thread as this is a more general problem.

So suppose we have some code designed for exotic configurations,
and we have tests that verify that exotic code.

When run in the usual (non-exotic) configuration, the test should
report something that would not scare people. But if one
wants to test that specific exotic configuration, he should be
able to easily verify that he successfully set up the required
configuration and that the test worked well.

Here are the options I see:
1) introduce a new test status (like skipped) to mark those tests that
did not actually run
2) agree on exact wording that the skipped tests would print, to allow
grepping the logs later
3) introduce indicator tests that would fail when the current
configuration disallows running certain tests (a sketch follows below)
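
To make option 3 concrete, a minimal JUnit sketch (the class name and the
os.arch check are only an illustration of the idea, not a real test):

   import junit.framework.TestCase;

   /**
    * "Indicator" test: it fails when the current configuration does not
    * allow the exotic tests to run, so a green run of the exotic suite
    * really means the exotic code was exercised.
    */
   public class SparcConfigIndicatorTest extends TestCase {

       public void testConfigurationIsExotic() {
           // property name and expected value are just an example
           String arch = System.getProperty("os.arch");
           assertEquals("exotic suite requires a sparc machine", "sparc", arch);
       }
   }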

Please let me know what you think

Thanks,
Mikhail Loenko
Intel Middleware Products Division



