On 05/12/2007, Michael G Schwern <[EMAIL PROTECTED]> wrote:
> Since this whole discussion has unhinged a bit from reality, maybe you can give
> some concrete examples of the problems you're talking about?  You obviously
> have some specific breakdowns in mind.

I don't. I'm arguing against what has been put forward as good
practice when there are other, better practices that are approximately
as easy and don't have the same downsides.

In fairness, though, these bad practices were far more strongly
advocated in the previous thread on this topic than in this one.

> Fergal Daly wrote:
> >> Modules do not have a binary state of working or not working.  They're
> >> composed of piles of (often too many) features.  Code can be shippable
> >> without every single thing working.
> >
> > You're right, I was being binary, but you were being unary. There are
> > 3 cases:
> >
> > 1 the breakage was not so important, so you don't bail no matter what
> >   version you find.
> > 2 it's fuzzy: maybe it's OK to use Foo version X, but once Foo version
> >   X+1 has been released you want to force people to use it.
> > 3 the breakage is serious: you always want to bail if you find Foo
> >   version X (and so you definitely don't switch the tests to TODO).
> >
> > You claimed 2 is always the case.  I claimed that 1 and 3 occur.
>
> If I did, that wasn't my intent.  I only talked about #2 because it's the only
> one that results in the user seeing passing TODO tests, which is what we were
> talking about.
>
>
> > I'm happy to admit that 2 can also occur. The point remains: you would
> > not necessarily change your module's requirements as a reaction to X+1
> > being released. You might, or you might change it beforehand if it
> > really matters, or you might not change it at all.
>
> And I might dip my head in whipped cream and go give a random stranger a foot
> bath.  You seem to have covered all possibilities, good and bad.  I'm not sure
> to what end.
>
> The final choice, incrementing the dependency version to one that does not yet
> exist, boils down to "it won't work".  It's also ill-advised to anticipate
> that version X+1 will fix a given bug, as on more than one occasion an
> anticipated bug has not been fixed in the next version.

As I said earlier though, in Module::Build you have the option of
saying "< X" and then, when it's finally fixed, saying "!= X"
(and "!= X+1" if that didn't fix it).
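
For concreteness, a Build.PL along these lines would do it (the module
names and version numbers are invented; the version-range strings are
Module::Build's own syntax):

    use Module::Build;

    Module::Build->new(
        module_name => 'My::Module',
        requires    => {
            # While no fixed release of Foo exists yet:
            #   'Foo' => '< 1.23',
            # Once the fix ships, exclude only the known-broken release:
            'Foo' => '>= 1.00, != 1.23',
        },
    )->create_build_script;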

> Anyhow, to get back to the point, it boils down to an author's decision how to
> deal with a known bug.  TODO tests are orthogonal.
>
>
> >> Maybe we're arguing two different situations.  Yours seems to be when
> >> there is a broken version of a dependency, but a known working version
> >> exists.  In this case, you're right, it's better resolved with a rich
> >> dependency system.
> >
> > I think maybe we are.
> >
> > You're talking about where someone writes a TODO for a feature that
> > has never worked. That's legit, although I still think there's
> > something odd about it as you personally have nothing "to do". I agree
> > it's not dangerous.
>
> Sure you do: you have to watch for when the dependency fixes its bug.  But
> that's boring and rote, which is what computers are for!  So you write a
> TODO test to automate the process.  [1]

That's back on the other case. I'm only talking about taking an
existing, previously passing test and marking it TODO.

> In a large project, sometimes things get implemented when you implement other
> things.  This is generally more applicable to bugs, but sometimes to minor
> features.
>
> Then there are folks who embrace the whole test-first thing and write out lots
> and lots of tests beforehand.  Maybe you decide not to implement them all
> before shipping.  Rather than delete or comment out those tests, just wrap
> them in TODO blocks.  Then you don't have to do any fiddling with the tests
> before and after release, something which leads to an annoying shear between
> the code the author uses and the code users use.

That's all fine.
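
For reference, that pattern looks something like this with Test::More
(the module and function names are invented):

    use Test::More tests => 2;
    use My::Module;    # hypothetical module under test

    ok( My::Module::works(), 'existing behaviour' );

    TODO: {
        local $TODO = 'frobnicate() not implemented yet';

        # Reported as "not ok ... # TODO", so the suite still passes;
        # once frobnicate() works the harness shows an unexpected pass.
        is( My::Module::frobnicate('x'), 'X', 'frobnicate() upcases' );
    }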

> There is also the "I don't think feature X works in Y environment" problem.
> For example, say you have something that depends on symlinks.  You could hard
> code in your test to skip if on Windows or some such, but that's often too
> broad.  Maybe they'll add them in a later version, or with a different
> filesystem (it's happened on VMS) or with some fancy 3rd party hack.  It's
> nice to get that information back.

How do you get this information back? Unexpected passes are not
reported to you. If you want to be informed about things like this, a
TODO is not a very good way to do it.

I would say you should test whether the feature is there. If it is, run
the tests and enable the feature; if not, don't run the tests and
disable the feature.
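
Something like this, to stick with your symlink example (a sketch of a
hypothetical test file, not a definitive recipe):

    use Test::More;

    # Probe for working symlinks instead of hard-coding an OS check
    # ($^O eq 'MSWin32' is both too broad and too narrow).
    my $probe = "symlink_probe_$$";
    my $have_symlinks = eval { symlink( $0, $probe ) };
    unlink $probe if $have_symlinks;

    if ($have_symlinks) {
        plan tests => 1;
        pass('symlink-dependent tests would run here');
    }
    else {
        plan skip_all => 'no working symlink() on this system';
    }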

I think conditional enabling of not-so-important features depending on
test results is actually a far better way to do this, although we have
no infrastructure for that at the moment.

> > I'm talking about people converting tests that were working just fine
> > to be TODO tests because the latest version of Foo (an external
> > module) has a new bug. While Foo is broken, they don't want lots of
> > bug reports from CPAN testers that they can't do anything about.
> >
> > This use of TODO allows you to silence the alarm and also gives you a
> > way to spot when the alarm condition has passed. It's convenient for
> > developers but it's 2 fingers to users who can now get false passes
> > from the test suites.
>
> It still boils down to what known bugs the author is willing to release with.
>  Once the author has decided they don't want to hear about a broken
> dependency,  and that the breakage isn't important, the damage is done.  The
> TODO test is orthogonal.
>
> Again, consider the alternative which is to comment the test out.  Then you
> have NO information.

Who's "you"?

If you==user then a failing TODO test and a commented-out test are
indistinguishable unless you go digging in the code or the TAP stream. A
passing TODO is just confusing.

If you==author then the only time a TODO and a commented-out test are
distinguishable is when _you_ are running them and studying the
output. Studying the output of tests that have some TODOs passing is
not simple. A far easier way to be notified when Foo starts working is
to write an explicit test for Foo's functionality and run it whenever
you see a new Foo.
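
For example, a standalone probe along these lines, kept outside the
shipped test suite (Foo and its API here are invented):

    #!/usr/bin/perl
    # foo-probe.t -- run by hand against each new release of Foo.
    use strict;
    use warnings;
    use Test::More tests => 1;
    use Foo;

    # The behaviour that Foo version X broke.  When this passes again,
    # drop the workaround or version restriction in the real dist.
    is( Foo::frobnicate('input'), 'expected output',
        'the bug in Foo version X is fixed' );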

> So I think the problem you're concerned with is poor release decisions.  TODO
> tests are just a tool being employed therein.

The point is that you have no idea what functionality is important for
your users. Disabling (with TODO or any other means) tests of
previously working functionality that might be critical for a given
user is always a poor release decision in my book.

You have no idea what version of Foo they're using or what strangeness
is lurking in their environment. If someone is going to use your
module for real work, they should be able to run the full set of
tests.

The test suite should not vary depending on what the latest uploaded
version of Foo on CPAN does. Perhaps the reporting should vary, and
perhaps the toolchain's reaction to failing tests could be made
smarter; that would remove developers' desire to use TODO in
this way.

F

>
> [1] Don't get too hung up on names; things only get one even though they can
> do lots of things.  I'm sure you've written lots of Perl programs that didn't
> do much extracting or reporting.
>
>
> --
> Reality is that which, when you stop believing in it, doesn't go away.
>         -- Philip K. Dick
>
