Adam Kennedy wrote:
>> Hmm, that would waste a lot of disk.  Maybe we could cache the
>> tarballs somewhere.  Maybe on the network or even Amazon S3.  Some
>> sort of big repository of modules.  We can call it the Network of
>> Archived Postinstall Crap, NAPC for short!
> 
> This same argument applies to the tests.

Sure does. :)


> Further, you've missed the main point here, which is that I'm suggesting
> saving the post-make state of the tarballs with all build assumptions
> and such in place, not the pristine just-unrolled version.
> 
> We're already "wasting" disk saving the tests, why not waste a bit more
> and make sure we actually save enough to know the tests will work for sure.

I just thought of something.  The point is to test that what's *installed* is 
working, right?  But you're talking about testing a snapshot taken at the point of 
installation, and that isn't necessarily what's actually installed.  Consider some 
common ways the code you're running can fall out of sync with what was installed.

* Admin A installs and archives the source properly.  Admin B installs by hand 
and doesn't update the archive.  Or the package manager installs a different 
version.  Or any number of similar module shadowing and overlay scenarios.

* The installed module is edited in place, whether to fix a bug, add a feature, 
or alter a Config.pm.

* Disk corruption.

* The CPAN shell's "recompile" command is run (such as after an architecture 
change, e.g. going from a PowerPC to an Intel Mac, as I just did).

Testing the post-make source doesn't do anything for the above scenarios, and 
the first two are rather common.  It gives you a false sense of security.
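
Just to make the distinction concrete, here's a rough sketch of the sort of drift 
check I mean.  Foo::Bar and the expected digest are placeholders; the point is only 
to compare the file Perl actually loads against whatever the installer archived.

    #!/usr/bin/perl
    use strict;
    use warnings;
    use Digest::SHA;

    # Placeholder module name and digest; a real installer would have
    # recorded the digest of each .pm at install time.
    my $module   = 'Foo::Bar';
    my $expected = '0' x 64;

    (my $inc_key = "$module.pm") =~ s{::}{/}g;
    eval "require $module; 1" or die "Cannot load $module: $@";

    # %INC tells us which file Perl actually loaded, wherever it is in @INC.
    my $path   = $INC{$inc_key};
    my $digest = Digest::SHA->new(256)->addfile($path)->hexdigest;

    print "Loaded $module from $path\n";
    print $digest eq $expected
        ? "Installed copy matches what was archived.\n"
        : "Drift: installed copy differs from what was archived.\n";

A post-make snapshot of the source tells you nothing about whether a check like 
that would pass today.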

To be useful and accurate, you have to test against the actual code that is 
installed.  Not what you think is installed.
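
And if the saved tests are going to be re-run later, they at least need to make 
sure they're exercising the installed copy rather than a leftover build tree.  A 
sketch with Test::More (module name hypothetical):

    use strict;
    use warnings;
    use Test::More tests => 2;

    my $module = 'Foo::Bar';    # placeholder
    (my $inc_key = "$module.pm") =~ s{::}{/}g;

    require_ok($module);

    # If the loaded path points into blib/ we're testing the build
    # directory, not the copy that's actually installed.
    unlike( $INC{$inc_key}, qr/\bblib\b/,
        "$module was loaded from the installed tree" );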
