On Mon, 2012-05-21 at 15:39 -0600, Deepak Giridharagopal wrote:

> 1) The data stored in PuppetDB is entirely driven by puppetmasters
> compiling catalogs for agents. If your entire database exploded and
> lost all data, everything will be 100% repopulated within around
> $runinterval minutes.

I think this is a somewhat dangerous line of thinking.  Please
correct me if my understanding of storedconfigs is wrong, but suppose I
am managing a resource with resources { 'type': purge => true } (or a
purged directory populated by file resources), and some subset of those
resources are exported.  If my "entire database exploded", wouldn't
Puppet purge any resources that hadn't yet been repopulated during that
window?  They would obviously be replaced eventually, but if those were
critical resources (think exported Nagios configs, /etc/hosts entries,
or the like) then this could be a really big problem.
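
To make the scenario concrete, here is a minimal sketch of the kind of
pattern I mean (names, addresses, and paths are illustrative, not our
actual manifests):

    # On each monitored node: export a Nagios host entry.
    @@nagios_host { $::fqdn:
      ensure  => present,
      address => $::ipaddress,
      target  => '/etc/nagios/conf.d/hosts.cfg',
    }

    # On the Nagios server: collect every exported nagios_host, and
    # purge any instance on disk that isn't in the compiled catalog.
    Nagios_host <<| |>>
    resources { 'nagios_host': purge => true }

If the database were empty when the Nagios server next compiled its
catalog, the collector would return nothing, so (as I understand it)
the purge would remove every host definition until the other nodes had
checked in and re-exported theirs.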

To me, storedconfigs are one of the killer features in Puppet.  We are
using them for a handful of critical things and I plan to only expand
their use.  I'm glad that Puppet Labs is focusing some attention on
them, but this attitude of "we can just wait out a repopulation" has me
worried.  Again, maybe I'm misunderstanding how purging interacts with
exported resources, but my experience has been that if an exported
resource is cleared from the database, the corresponding record gets
purged on the collecting node.

In a slightly different vein, does PuppetDB support a clustered or HA
configuration?  I assume active/passive, at least, must be workable.
Any gotchas to watch for?

Thanks,
Sean
