On Mon, 25 Feb 2008, James Keenan via RT wrote:

> On Mon Feb 25 13:09:04 2008, doughera wrote:
> 
> > 
> > This sounds to me as if it assumes all the tests will be running in
> > order
> > every time.  
> 
> For the most part, this is true.  However, once we work out the kinks,
> this will not be cause for alarm.
> 
> We want our tests to simulate the development of the Parrot::Configure
> object over the course of the configuration steps.  Currently, the
> configuration step tests do that but very imperfectly.  That's because
> as a kludge for having the information from steps 1..(N-1) available, I
> had the individual test files run step 2, init::defaults, and, in
> certain cases, some other steps as well.  You and chromatic have argued
> that that approach is ultimately inadequate, and I concur.
> 
> To remedy this situation, however, we have to test in configuration step
> order and devise a way for configuration step N, once it has succeeded,
> to pass along information about the Parrot::Configure object to step
> N+1.  That means we have to let go of the notion that a test of a
> particular configuration step can be completely self-contained.

Yes.  Exactly.  I have been arguing that point for years.  Every step
can involve triggers, or callbacks, which can invoke arbitrary code
and even change the results of preceding steps.  It is very much
history-dependent.  I think the best disk cache of such information is
lib/Parrot/Config/Generated.pm, because only that file represents the
best guess of a successful configuration.  Everything else is a tentative
ok-so-far-but-subject-to-change approximation.
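
For what it's worth, anything that needs the settled values can already
read them from there; a consumer needs nothing more than this (a rough
sketch, assuming the usual %PConfig export via Parrot::Config):

    use Parrot::Config;     # re-exports %PConfig from Generated.pm

    # the final, written-to-disk results of configuration
    my $cc = $PConfig{cc};
    my $ld = $PConfig{ld};
    print "configured with cc=$cc, ld=$ld\n";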

I appreciate your explanations of the goals and intent of this design.
It's probably workable; I just don't know if it's worthwhile.  At the
moment, it has the same problem as the previous design -- it's failing
to honor my command line arguments.  Whether that will be easier to
fix in this design or not, I don't know.  I couldn't figure it out
this afternoon.

I can only observe that every layer of complexity is making it *harder*
to debug failing tests.  A hidden binary cache file only adds
another layer of complexity.  It's important to remember that 
when Configure (or its tests) fails, it is usually on someone else's
machine where you don't have access and where getting reliable feedback
to fix the problem entails mailing back and forth with a (possibly
inexperienced and probably busy) user who might not be willing or able
to peel back layer after layer of complexity to help solve the problem.
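
As I understand the new scheme, each step test is supposed to thaw the
frozen Parrot::Configure object left behind by the previous step.  My
rough reconstruction (the cache filename here is made up):

    use Storable qw(nstore retrieve);

    # after step N's test passes, freeze the object ...
    nstore($conf, '.configure_state');

    # ... and step N+1's test thaws it back:
    my $conf = eval { retrieve('.configure_state') };
    defined $conf or die "no saved Parrot::Configure object: $@";

If that eval quietly yields undef, the very next method call blows up
with exactly the "Can't call method ... on an undefined value" error
reported below.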

> >     Configure.pl --cc=gcc --link=gcc --ld=gcc --cxx=g++ --test
> > 
> > The first major failure is here:
> > 
> > t/steps/inter_progs-01.........................Can't call method
> > "data" on an undefined value at (eval 20) line 7.
> > # Looks like you planned 11 tests but only ran 2.
> > # Looks like your test died just after 2.
> >  Dubious, test returned 255 (wstat 65280, 0xff00)
> >  Failed 9/11 subtests

> As suggested above, my hunch is that if we can get this step to pass,
> everything after it will pass as well.  Can I assume that, with the
> exception of '--test', the options you call above would normally result
> in a successful configuration?

Yes, they should work.  Or at least they have in the past.

> Thanks for taking the time to examine this.

I'm afraid I won't have time to follow up for quite a while.  But the
key problem appears to be in handling the command line arguments.
Both
    Configure.pl --cc=gcc --link=gcc --ld=gcc --cxx=g++ --test
and
    Configure.pl --cc=bogus --link=bogus --ld=bogus --cxx=bogus++ --test

fail in the same way, which looks very suspicious to me.
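
If --cc=bogus and --cc=gcc really take the same path, the first thing
I'd check is whether the values ever reach the configure data at all --
something like this early in the failing test (guessing that the data
object has the usual get() accessor):

    use Data::Dumper;

    # show what the Parrot::Configure object actually recorded
    print STDERR Dumper(
        { cc => $conf->data->get('cc'), ld => $conf->data->get('ld') }
    );

If "bogus" never shows up there, the options are being dropped before
the steps ever run.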

-- 
    Andy Dougherty              [EMAIL PROTECTED]
