I think this is a great idea.  It'll catch a ton of problems that we
otherwise miss, as you say, and I've been bitten by them a lot.  I
think it'll also encourage us to have better log and error hygiene in
general.

It will almost definitely result in a ton of failing tests at first,
so expect to spend a lot of time on that aspect of the implementation.

-- 
http://puppetlabs.com/ | +1-615-594-8199 | @puppetmasterd

On Mar 2, 2011, at 16:03, Daniel Pittman <dan...@puppetlabs.com> wrote:

> So, we have a history of problems turning up where we write tests that
> look like they are passing, then it later turns out that we actually
> caught the exception, emitted a nasty error to the logs, and
> everything *looked* right from the perspective of the test code.
> Which is nasty, especially when that catch gets introduced later in
> the code.
>
> We have a bunch of tests that do this "wrong": they assert that
> specific log methods are never called on specific objects, which will
> fail when the implementation changes, especially because we can call log
> methods on anything and it winds up doing the right thing.
>
> I have an experimental, hacky branch where I was playing around with a
> solution to this, which implements a very simple rule:
>
> After every single test in the spec suite, assert that there are no
> log messages at error or higher in the collected logs, before we flush
> them.  If there are, fail hard, and treat it as a disaster.
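>
> Roughly, the hook looks something like this (a sketch only, not the
> actual branch code; it assumes the spec helper already collects
> Puppet log messages into an @logs array, and that the levels at error
> or worse are :err, :alert, :emerg, and :crit):
>
>   RSpec.configure do |config|
>     config.after :each do
>       # Anything at error level or worse that the test left behind.
>       errors = @logs.select do |log|
>         [:err, :alert, :emerg, :crit].include?(log.level)
>       end
>       # Fail hard rather than letting the problem slip past unnoticed.
>       unless errors.empty?
>         raise "unflushed error logs: #{errors.map(&:message).join('; ')}"
>       end
>     end
>   end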
>
> This means that if you have code that *should* emit a log message like
> that, you need to deliberately flush those logs, which makes it clear
> that you meant to do that.  Ideally, you should also assert that the
> message was there like you planned.
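>
> In a test that genuinely expects an error, that ends up looking
> something like this (again just a sketch; the resource, the method,
> and the message are made up for illustration):
>
>   it "logs an error when the source file is missing" do
>     resource.fetch_content    # hypothetical call that logs an error
>     @logs.detect { |log| log.message =~ /source/ }.should_not be_nil
>     @logs.clear               # deliberate flush, so the after-hook stays quiet
>   end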
>
> https://github.com/daniel-pittman/puppet/commits/feature%2F2.6.next%2Fuseful-check-logs-after-every-test
>
> That branch contains my work-in-progress implementation, and some
> associated test fixups.  It is far from production quality, but
> illustrates the point.  During inspection of the 90-ish failures it
> caused, at least a couple looked like they were genuine errors in our
> tests that were missed, not just false positives.
>
> I would love feedback, especially about:
>
> * why this is a terrible, nasty, no-good, bad idea.
> * how to integrate the assertion better into the code.
> * cleaner ways to solve some of the "common false positive" patterns
> in the code.
>
> Thanks,
>     Daniel
> --
> ⎋ Puppet Labs Developer – http://puppetlabs.com
> ✉ Daniel Pittman <dan...@puppetlabs.com>
> ✆ Contact me via gtalk, email, or phone: +1 (877) 575-9775
> ♲ Made with 100 percent post-consumer electrons
>
