On Sep 6, 4:55 am, Simon King <simon.k...@nuigalway.ie> wrote:
> It seems that due to this problem, my package may compute a different
> (though isomorphic) ring presentation for a cohomology ring on
> different machines.
>
> But in some tests, I create elements with certain properties, and for
> that purpose I need a computationally unique ring presentation, not
> only for testing the result but for setting up the example. So, even
> marking some test as random would not help.

If your tests depend on the choice of random numbers in a non-
deterministic algorithm, they are going to be very fragile. They don't
just test the correctness of your code but some extra, arbitrary
behaviour as well. If, in the future, GAP changes one of the internal
algorithms you depend on, your tests will almost certainly break.

One possible way to handle this is to rewrite your tests so that they
check the properties that matter (and that should not depend on the
underlying implementation), in much the same way that set equality is
better tested by asking Sage "are these sets equal?" than by comparing
their string representations with precomputed ones.
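
For instance, a doctest along the lines of

    sage: Set([3, 1, 2]) == Set([1, 2, 3])   # compare as sets, not as strings
    True

is robust, whereas a doctest that expects one particular printed form
of the set silently depends on whatever element order Sage happens to
choose.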

In your case it sounds like you'd have to delve a bit deeper, because
you are currently making assumptions about the representation of the
ring just to get your example started. However, when you constructed
the example, you probably found your element with certain properties
by doing some computations and checks in the representation that Sage
gave you. You could make those computations part of the test.
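
As a sketch of what that could look like (the names CohomologyRing,
gens(), degree() and the degree-two condition below are purely
illustrative, standing in for whatever your package actually
provides):

    sage: H = CohomologyRing(...)    # hypothetical constructor, details elided
    sage: # select the element by its defining property, not by the name or
    sage: # position it happens to get in one particular presentation
    sage: candidates = [g for g in H.gens() if g.degree() == 2]
    sage: x = candidates[0]
    sage: x^2 == 0                   # illustrative: the property the example needs
    True

Selected that way, the element (and hence the test) stays meaningful
under any isomorphic presentation Sage might hand you.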

I realize that for cohomology rings in particular, there is so much
choice in representation that making all those computations part of
the test might be unreasonable for the test writer (though, I suspect,
almost certainly not for the testing time), so it's a judgement call
for you: is a truly robust test worth the effort from the test writer?
If you decide it isn't, you may want to mark the ambiguity in the
test, together with the reason why you left it in.

Keep in mind that where subtle changes in behaviour of software occur
between platforms, there may well be similar changes in behaviour of
that software between different versions on the same platform.
Resetting random seeds is useful for reproducing errors, less so for
reproducing test results.
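
For example, Sage's set_random_seed() makes a session replayable,
which is exactly what you want when chasing down a failure that only
appears for some random choices:

    sage: set_random_seed(0)            # pin Sage's pseudo-random state
    sage: v = random_vector(QQ, 5)      # now reproducible while debugging

Pinning the seed lets you replay an error, but it only freezes the
arbitrary choices rather than removing the test's dependence on them.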
