On 2011-04-13 01:00, Michael G Schwern wrote:
> Are you talking about integrity testing, making sure the system still has all
> its pieces and they still work?  Or runtime testing, essentially a
> combination of asserts and logging which runs from inside the code to check
> that the system is internally consistent?

Like my favourite Perl promoter says:

    "We are in software, I don't know what I want :-)"

Heh, I really don't. :-) I just thought it could be kind of cool to re-run
the/some tests on production even after installing, when "things" suddenly start
to go wrong. So the ideas about what can actually be tested post-install are
nice!

> For the former, you can run the shipped module tests on your installed code.
> Run them with prove and leave off the -l.  There are two issues: first, the
> tests may want bits of the source directory, so you're best off running them
> from the original source directory.  Second, the tests might hard-code @INC
> to use the source library, so you'll have to find and knock those out.

I've tried to create a dh_installtests (Debian helper script) that would install
the "t/" directory to /usr/share/doc/$package/t, the same way a distribution's
examples are installed [1]. That part worked fine, but when I tried to run the
same tests without the distribution tree, most of them failed, exactly as
Michael describes. It would take a lot of dark, unreliable magic to tweak the
tests so that they run and are non-destructive.
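For reference, the difference looks roughly like this (the Foo-Bar distribution
and libfoo-perl package names are invented):

    # works: from the unpacked source tree, testing the installed modules
    cd Foo-Bar-1.23
    prove t/        # note: no -l, so @INC finds the installed copy

    # mostly fails: from the installed t/ directory alone
    cd /usr/share/doc/libfoo-perl
    prove t/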

> For the latter, there is traditional asserts (Carp::Assert,
> Carp::Assert::More), and design-by-contract (Sub::Contract) to start.  I

Looks good!
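If I understand it right, the Carp::Assert style would look something like this
(the withdraw() function and its fields are just an invented example):

    use Carp::Assert;

    sub withdraw {
        my ($account, $amount) = @_;

        # "if DEBUG" lets perl optimise the check away completely
        # when assertions are disabled with "no Carp::Assert;"
        assert($amount > 0, 'withdrawal amount is positive') if DEBUG;

        $account->{balance} -= $amount;
        return $account->{balance};
    }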

> experimented with putting tests right in the code with Test::AtRuntime.  The
> results go to a log file which can then be watched for failures.  It has some
> issues: it's a source filter; it will chew up memory needlessly storing test
> history; test functions aren't designed to be fast... but the idea is solid.

I read a blog post a long time ago about at-run-time testing, and one of the
interesting ideas was to run the tests only on every Nth call/request/whatever,
not every time. Such tests can then run on production services too, and with a
little bit of luck and enough time they will sooner or later reveal problematic
inputs without slowing down the users. Still, while the "theory" is nice, the
road from finding the trouble spots on the production servers to delivering them
to the right people is pretty long :-)
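Something like this sketch is what I had in mind; the handle_request() and
result_is_consistent() functions are made up for the illustration:

    use constant SAMPLE_EVERY => 100;
    my $counter = 0;

    sub handle_request_checked {
        my ($request) = @_;
        my $result = handle_request($request);

        # run the expensive consistency check on every Nth call only,
        # so production users don't pay for it on every request
        if (++$counter % SAMPLE_EVERY == 0) {
            warn "inconsistent result for request $request->{id}\n"
                unless result_is_consistent($result);
        }
        return $result;
    }

The warn output goes to STDERR, which on a production box ends up in the logs
that can then be watched for failures, as Michael describes.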

Thank you for the ideas!
Jozef


[1] http://git.debian.org/?p=debhelper/debhelper.git;a=blob_plain;f=dh_installexamples;hb=HEAD
