I'm moving from a haphazard model, with all the cfengine config and managed
files sitting on disk in one place and occasionally edited in place...
to storing the config tree and managed files in our Perforce system.
I'm referencing the relevant articles here:
http://lists.gnu.org/archive/html/help-cfengine/2004-07/msg00014.html
http://www.onlamp.com/pub/a/onlamp/2004/05/13/distributed_cfengine.html
http://madstop.com/
What I'm leaning towards is a push-initiated update of the cfservd
systems, forcing them to check out committed (final) changes from the RCS.
Files served by cfengine will be kept on a local ramdisk for speed
reasons. The makefile method Jamie Wilkinson mentioned seems apropos; I
like the NIS-like feel of being able to manage things behind the scenes
and then update the served state with one command, though I don't know
make so I'll try it with Perl.
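On the push side I'm picturing something as simple as the sketch below,
run from the admin host. The hostnames and the per-server update script
path are made up for illustration:

    #!/usr/bin/perl -w
    use strict;

    # Hosts running cfservd and the script each one runs to refresh its
    # tree; both are placeholders.
    my @servers = qw(cfserv1 cfserv2);
    my $update  = '/usr/local/sbin/update-cfengine-tree';

    foreach my $host (@servers) {
        print "updating $host...\n";
        my $rc = system('ssh', $host, $update);
        if ($rc != 0) {
            warn "update failed on $host (exit ", $rc >> 8, "), stopping here\n";
            exit 1;
        }
    }
    print "all cfservd hosts updated\n";

Stopping on the first failure is deliberate; a half-updated set of servers
seems easier to reason about than a silently inconsistent one.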
The only pitfalls I can foresee moving in this direction are:
1 making sure the ramdisk size/state is sane before starting cfservd
(lest you wipe the configs of every connecting host...)
2 handling checkout from the RCS and sanity-checking (making sure error
codes from the checkout are handled, and that overwriting existing data
is aborted on error)
1 is trivial.
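Something like this pre-flight check in whatever wrapper starts cfservd
ought to cover it; the mount point and minimum size below are just
placeholders:

    #!/usr/bin/perl -w
    use strict;

    # Ramdisk holding the served tree, and the least free space we'll
    # accept; both values are site-specific placeholders.
    my $ramdisk = '/var/cfengine/masterfiles';
    my $min_kb  = 32 * 1024;

    die "$ramdisk is not a directory\n" unless -d $ramdisk;

    # df -P gives one predictable line per filesystem:
    # Filesystem 1024-blocks Used Available Capacity Mounted-on
    my @df = `df -Pk $ramdisk`;
    my ($free_kb, $mounted_on) = (split ' ', $df[-1])[3, 5];

    die "$ramdisk is not its own mount point (got $mounted_on)\n"
        unless $mounted_on eq $ramdisk;
    die "only ${free_kb}KB free on $ramdisk, want at least ${min_kb}KB\n"
        if $free_kb < $min_kb;

    # Never serve an empty tree; that's the "wipe every connecting host" case.
    opendir(my $dh, $ramdisk) or die "can't read $ramdisk: $!\n";
    my @entries = grep { !/^\.\.?$/ } readdir $dh;
    closedir $dh;
    die "$ramdisk is empty, refusing to start cfservd\n" unless @entries;

    exit 0;   # safe to start cfservd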
2 probably requires a bit more work than I'm expecting, as the
consequences of replicating a bad/incomplete/empty checkout are severe.
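The shape I have in mind for 2 is: sync into a scratch workspace,
sanity-check that, and only then let it near the served copy. A rough
sketch, assuming the Perforce client's root is the scratch directory (the
paths, client name, and cfagent.conf check are all placeholders):

    #!/usr/bin/perl -w
    use strict;

    # All of these are placeholders; the real values depend on the client spec.
    my $staging = '/var/cfengine/staging';      # Perforce client workspace root
    my $live    = '/var/cfengine/masterfiles';  # ramdisk area cfservd serves
    $ENV{P4CLIENT} = 'cfengine-master';         # client whose root is $staging

    # Step 1: sync the committed tree into staging. Any failure here must
    # leave the live tree alone.
    my $rc = system('p4', 'sync', '-f');
    die "p4 sync failed (exit ", $rc >> 8, "), live tree left untouched\n"
        if $rc != 0;

    # Step 2: sanity-check the checkout before it replaces anything.
    my @files = glob("$staging/*");
    die "staging tree is missing or empty, aborting\n"
        unless -d $staging && @files;
    die "staging tree has no inputs/cfagent.conf, aborting\n"
        unless -f "$staging/inputs/cfagent.conf";

    # Step 3: only now overwrite the served copy, with a tool whose exit
    # status we can trust.
    $rc = system('rsync', '-a', '--delete', "$staging/", "$live/");
    die "rsync into $live failed (exit ", $rc >> 8, ")\n" if $rc != 0;

    print "served tree updated from Perforce\n";

The point of the staging step is that nothing touches the ramdisk until
the checkout has already proven itself.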
Does anyone see (or has anyone experienced) any other issues likely to
result from this? Any tips or suggestions on how you're handling a
similar situation?
Cheers,
/eli