On Wed, 2015-11-25 at 11:55 +0100, Bernd Schmidt wrote:
> On 11/25/2015 03:26 AM, David Malcolm wrote:
> > Consider the case where an assumption that the host is little-endian
> > creeps into one of the bitmap functions.  Some time later,
> > another developer updates their working copy from svn on a big-endian
> > host and finds that lots of things are broken.  What's the ideal
> > behavior?
> 
> Internal compiler error in test_bitmaps, IMO. That's the quickest way to 
> get to the right place in the debugger.
> 
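For concreteness, the kind of assert that could do that might look
like this.  This is only a sketch; the macro and the helper names are
made up for illustration, not the actual selftest API (though
internal_error itself is the existing diagnostic entry point):

  /* Fail by triggering an ICE, so that gdb stops right at the
     failing check, with the file, line and expression reported.  */
  #define SELFTEST_ASSERT(EXPR)                             \
    do {                                                    \
      if (!(EXPR))                                          \
        internal_error ("selftest failed at %s:%d: %s",     \
                        __FILE__, __LINE__, #EXPR);         \
    } while (0)

  static void
  test_bitmaps ()
  {
    bitmap b = BITMAP_ALLOC (NULL);
    bitmap_set_bit (b, 7);
    SELFTEST_ASSERT (bitmap_bit_p (b, 7));
    BITMAP_FREE (b);
  }
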
> >> In the end I think I lean towards run everything with automatic
> >> registration/discovery.  But I still have state worries.  Or to put it
> >> another way, given a set of tests, we should be able to run them in an
> >> arbitrary order with no changes in the expected output or pass/fail 
> >> results.
> >
> > That would be the ideal - though do we require randomization, or merely
> > hold it up as an ideal?  As it happens, I believe function-tests.c has
> > an ordering dependency (an rtl initialization assert, iirc), which
> > sorting the tests papered over.
> 
> What do you hope to gain with randomization? IMO if there are 
> dependencies, we should be able to specify priority levels, which could 
> also help running lower-level tests first.

I don't particularly want randomization; I was just wondering if others
wanted it, given that it's one of the features that test frameworks
tend to provide.

I do want some level of determinism over test ordering, for the sake of
everyone's sanity.  It's probably simplest to either hardcode the order,
or have priority levels.  I favor the former (and right now am leaning
towards a very explicit no-magic approach with no auto-registration,
given the linker issues I've been seeing with it).
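
For example (a sketch only, with illustrative names rather than
anything from the actual patch):

  /* Run every selftest, in a hardcoded order: lowest-level data
     structures first, then the things built on top of them.  No
     static constructors, so no link-order surprises.  */
  void
  run_all_selftests ()
  {
    test_bitmaps ();
    test_vec ();
    /* ...each new test gets added here explicitly...  */
    test_functions ();
  }

Having to add each test here by hand is the cost, but it keeps the
ordering obvious and deterministic.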
