We are in the process of evaluating our Puppet-related test and
release process and are interested in hearing what other folks are doing.

We are in a position that is not ideal but, from what I can tell, not
unique.   Our current testing process is essentially the
responsibility of each person making a change.   Small changes are
committed and pushed to dev/qa/prod in one swoop, with the committer
spot-checking the results manually.    Larger changes are tested by
pointing a node at a Puppet environment that tracks the change branch
and manually verifying the desired behavior.

What we would like to do is start by implementing some basic control
points that require passing tests before changes move along, with the
goal of increasing test coverage over time to protect ourselves from
ourselves.

One thought we had as an initial step is to verify catalog compilation
for some number of nodes against the proposed changes and block the
changes if compilation fails.   That raises the next question around
tooling.   We could script a catalog-compilation test calling the
puppet binaries directly, but should we use this as an opportunity to
get familiar with rspec-puppet?
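For the rspec-puppet route, the compilation gate can be expressed as a
one-line spec per class. A minimal sketch, assuming an rspec-puppet
setup with the usual spec_helper; 'profile::base' is a placeholder
class name, not something from your tree:

```ruby
# spec/classes/profile_base_spec.rb
# 'profile::base' is a hypothetical class name -- substitute your own.
require 'spec_helper'

describe 'profile::base' do
  # The bare-minimum gate: the catalog must compile, and every
  # resource referenced in a dependency/notify relationship must
  # actually exist in the catalog.
  it { is_expected.to compile.with_all_deps }
end
```

Run under `rake spec`, a failure here blocks the change before it ever
reaches a live node, which is exactly the control point described above.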

Are people using catalog diffs at all in their release process?   It
would be nice to provide an automated catalog diff for people making
'small' changes, so they can make sure a change didn't accidentally
drop or modify a large number of resources.
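The core of such a diff is small once you have the two catalogs
compiled to JSON (one from the current branch, one from the proposed
change). A sketch under that assumption; the helper names and the
handling of the older "data"-wrapped catalog format are illustrative:

```ruby
require 'json'

# Index a compiled JSON catalog's resources by [type, title].
def load_resources(path)
  catalog = JSON.parse(File.read(path))
  # Newer catalogs keep resources at the top level; some older wire
  # formats nest them under "data". Handle both.
  data = catalog.fetch('data', catalog)
  data['resources'].each_with_object({}) do |r, idx|
    idx[[r['type'], r['title']]] = r.fetch('parameters', {})
  end
end

# Compare two resource indexes and report what a change would do.
def diff_catalogs(old, new)
  added   = (new.keys - old.keys).sort
  removed = (old.keys - new.keys).sort
  changed = (old.keys & new.keys).select { |k| old[k] != new[k] }.sort
  [added, removed, changed]
end
```

Printing the three lists (or just their sizes) in a pre-merge check
gives the "did my small change really stay small?" signal: a one-line
tweak that removes fifty resources jumps out immediately.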

So please share what you've found works or doesn't work at your shop.

TIA

-- 
You received this message because you are subscribed to the Google Groups 
"Puppet Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to puppet-users+unsubscr...@googlegroups.com.
To post to this group, send email to puppet-users@googlegroups.com.
Visit this group at http://groups.google.com/group/puppet-users?hl=en.
For more options, visit https://groups.google.com/groups/opt_out.

