On Thu, Jan 24, 2013 at 12:13 PM, Brian Malinconico <arjes...@gmail.com> wrote:

> Thank you for the feedback. We are actually using haproxy, and will
> undoubtedly use the stock book, it was just the example.
>
> I guess I am confused as to the pattern. I have looked over the Nagios
> examples many times but I am still unsure.
>
> The final example would be how to distribute a database IP without
> hard-coding it.
>
> My understanding of the exported resources would be that the database
> needs to export the configuration file, and the application servers would
> need to import that file. This means that the database box needs to have
> application level knowledge to create the config file needed.
>
>
> What am I not understanding? Is the Puppet pattern to set the $database_ip
> variable?
>

You don't need to export a full resource per se; an empty defined type lets
you export information as data.

define data(
  $value,
) {}

node database {
  @@data { 'ipaddress':
    # Obtained from Facter. Strictly speaking you don't even need to export
    # this, because the fact is already available in PuppetDB.
    value => $::ipaddress,
  }
  ...
}
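
For completeness, the collection side on the application servers would be the
usual spaceship operator (the node name here is just a placeholder). Note that
collecting only realizes the resource in that node's catalog; it does not hand
you the value back as a variable you can use:

node appserver {
  # Pull the exported resource from PuppetDB. This puts Data['ipaddress']
  # into this catalog, but the DSL gives you no way to read its $value
  # into a variable here.
  Data <<| title == 'ipaddress' |>>
}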

But the problem is that collection syntax only gathers resources, so you need
a custom function to perform the query and treat the result as data. This is
a gap in the Puppet DSL: there's no official solution if you want to treat
the catalog/PuppetDB as a source of data rather than resources.

There's a module under R.I. Pienaar's repo that Dan Bode used as a PoC for
the OpenStack modules. It's delivered as a Puppet face with matching Puppet
functions: https://github.com/ripienaar/ruby-puppetdb

The face provides a way to treat PuppetDB as a source of data. For example,
for the database IP you can simply ask (and you can ask more interesting
questions too: what network is it on, is it production, etc.):

$ puppet query node --query '(Package[mysql-server] and architecture=amd64)' --filter ipaddress

In a manifest this is:

$nodes = unique(query_active_nodes('Package[mysql-server] and architecture=amd64', 'ipaddress'))
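
Once you have the result as plain data, wiring it into configuration is
ordinary Puppet. A rough sketch (the file path, module name, and template are
placeholders, not anything the module ships):

$nodes = unique(query_active_nodes('Package[mysql-server] and architecture=amd64', 'ipaddress'))

file { '/etc/myapp/database.conf':
  ensure  => file,
  # The ERB template can read the query result directly, e.g.:
  #   database_ip = <%= @nodes.first %>
  content => template('myapp/database_conf.erb'),
}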

Some caveats:
1. It's not an official Puppet Labs project (i.e. it's experimental).
2. It's not really tested at scale, and it filters the data after fetching a
large result set. Recent changes in PuppetDB allow more optimal queries so
that PuppetDB itself does the filtering, but I'm pretty sure the module
doesn't take advantage of that yet.
3. PuppetDB records exports at catalog compilation time, so your database
server might not actually be online yet.

Thanks,

Nan

-- 
You received this message because you are subscribed to the Google Groups 
"Puppet Users" group.
To post to this group, send email to puppet-users@googlegroups.com.
To unsubscribe from this group, send email to 
puppet-users+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/puppet-users?hl=en.
