On 9/16/2010 1:26 AM, Luke Kanies wrote:
> Hi all,
> I've just stuck my proposal for a Catalog Service (which we've been
> bandying about internally for a while, and which I've been thinking
> about even longer) on the wiki:
> http://projects.puppetlabs.com/projects/puppet/wiki/CatalogServiceArchitecture

Interesting read :-) Here are a few notes:
* The document needs a list of the proposed functional changes; afaict
  they are:
  * insert a RESTful API between the puppetmaster and Catalog storage,
    thereby exposing a proper interface
  * decouple compilation and catalog serving completely
    * btw, using futures, one could compile a "template" catalog once and
      then quickly insert only the changing fact values?
  * enrich the search API to cover all resources and complex queries
  * implement additional backends:
    * a simple one with no external dependencies
    * a massively scalable one, using some NoSQL solution
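The "futures" idea above could be sketched roughly as follows. This is a
toy illustration in Python, not Puppet's actual compiler code; the
`Future`, `compile_template`, and `resolve` names are all invented here:

```python
# Sketch of the "template catalog" idea: compile the manifest once into a
# catalog whose fact references are left as placeholders (futures), then
# resolve per node by substituting that node's fact values.
# All names here are hypothetical, not part of Puppet's real API.

class Future:
    """Placeholder for a fact value that is unknown at compile time."""
    def __init__(self, fact_name):
        self.fact_name = fact_name

def compile_template(manifest):
    """Pretend compilation: expensive, done once; returns resources
    containing Future placeholders instead of concrete fact values."""
    return [{'type': 'file',
             'title': Future('fqdn'),          # filled in per node
             'content': manifest['content']}]

def resolve(template, facts):
    """Cheap per-node step: substitute fact values for futures."""
    catalog = []
    for res in template:
        catalog.append({k: (facts[v.fact_name] if isinstance(v, Future) else v)
                        for k, v in res.items()})
    return catalog

template = compile_template({'content': 'hello'})
catalog = resolve(template, {'fqdn': 'web01.example.com'})
```

The expensive compile happens once; serving a catalog to a new node is
then only the cheap `resolve` step, assuming the manifest's structure
does not itself branch on the facts being substituted.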
* I'm wondering how the flat-file-based backend will perform in the face
  of 100 systems. My intuition says that traditional SQL storage will
  remain a viable (performance vs. configuration effort) solution in this
  space.
* Re directly exposing the back-end interface: that is only an artifact
  of a badly designed API. If it really becomes a problem, supporting
  more complex queries, e.g. looking for multiple resource types at once,
  might be a viable way to avoid strong coupling to the backend.
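To make that concrete, here is a minimal in-memory sketch of such a query
interface; the `query` function, its parameters, and the store layout are
invented for illustration and are not part of the proposal:

```python
# Sketch of a richer catalog query that spans multiple resource types in
# one call, so callers never touch the backend directly. Hypothetical
# names throughout; the backend behind `query` could be flat files, SQL,
# or NoSQL without callers noticing.

STORE = [  # stand-in for whatever backend holds stored resources
    {'type': 'Monitoring::Service', 'title': 'http_web01_80', 'exported': True},
    {'type': 'Nagios::Host',        'title': 'web01',         'exported': True},
    {'type': 'File',                'title': '/etc/motd',     'exported': False},
]

def query(types=None, exported=None):
    """Single entry point: filter by any combination of criteria."""
    results = STORE
    if types is not None:
        results = [r for r in results if r['type'] in types]
    if exported is not None:
        results = [r for r in results if r['exported'] == exported]
    return results

# One query covering several resource types at once:
hits = query(types={'Monitoring::Service', 'Nagios::Host'}, exported=True)
```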
* I'm reminded of a trick I used in the early days to emulate a Catalog
  query in the main scope:

  case $hostname {
    'monitoring': {
      # apply monitoring stuff
    }
    'webserver': {
      # install webserver
    }
  }

  Today it looks like an awful hack, but the underlying principle might
  prove interesting, even if only to strengthen the case for Catalog
  storage by discarding it.
To contrast this with a modern implementation:

class monitoring {
  Monitoring::Service<<||>>
}

define monitoring::service::http() {
  @@monitoring::service { "http_${fqdn}_${name}":
    command => 'check_http',
    target  => $fqdn,
    args    => $name,  # the port, passed in as the resource title
  }
}

class webserver {
  monitoring::service::http { $port: }
}
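The storage side of that exported/collected flow can be modelled very
simply. A toy Python model (hypothetical names, not Puppet's real
storeconfigs code): one node's compile writes exported resources into the
shared store, and another node's compile collects them.

```python
# Toy model of the exported/collected resource flow: compiling the
# webserver node exports a monitoring::service record into shared catalog
# storage; compiling the monitoring node collects everything exported so
# far. All names here are illustrative only.

store = {}  # shared catalog storage: resource type -> list of resources

def compile_webserver(fqdn, port):
    # models: @@monitoring::service { "http_${fqdn}_${port}": ... }
    store.setdefault('Monitoring::Service', []).append(
        {'title': f'http_{fqdn}_{port}', 'command': 'check_http',
         'target': fqdn, 'args': port})

def compile_monitoring():
    # models: Monitoring::Service<<||>> -- collect all exported records
    return list(store.get('Monitoring::Service', []))

compile_webserver('web01.example.com', '80')
compile_webserver('web02.example.com', '8080')
checks = compile_monitoring()
```

Note that what `compile_monitoring` returns depends entirely on which
other compiles have already run, which is exactly the dataflow point
below.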
The main difference between the two solutions is the dataflow. In the
first solution, different resources are created from the same
configuration, depending on the environment. In the latter version,
compiling one node's manifest alters the environment for the other nodes.
Suddenly that sounds so wrong :) If all facts/nodes are available on the
server, shouldn't the puppetmaster be able to compile all Catalogs in one
step? Is the next manifest legal? Discuss!
node A { y { 1: } }
node B { x { 1: } }

define y() {
  $next = $name + 1
  @@x { $next: }
  Y<<||>>
}

define x() {
  $next = $name + 1
  @@y { $next: }
  X<<||>>
}
If I'm not completely off, this will create lots and lots of resources as
A and B are evaluated alternately.
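A quick simulation bears this out. The Python sketch below (hypothetical,
not how the puppetmaster actually evaluates manifests) alternates
compiles of A and B, where each compile evaluates every known instance of
its define and exports a successor:

```python
# Simulation of the x/y manifest above: each compile of node A evaluates
# every known `y` (starting from y { 1: }) and exports x { name+1: };
# each compile of node B does the mirror image with `x` and `y`.
# Alternating compiles make the resource count grow without bound.

exported = {'x': set(), 'y': set()}

def compile_a():
    # node A declares y { 1: }; every y exports x { $name + 1: }
    for n in {1} | exported['y']:
        exported['x'].add(n + 1)

def compile_b():
    # node B declares x { 1: }; every x exports y { $name + 1: }
    for n in {1} | exported['x']:
        exported['y'].add(n + 1)

for _ in range(5):  # five alternating compile rounds
    compile_a()
    compile_b()

total = len(exported['x']) + len(exported['y'])
```

Every round adds one new `x` and one new `y`, so the sets never converge;
a fixed point would require the collectors to stop finding new exports.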
The last part might be a little bit off-topic, but I think it does
pertain to the whole "all-nodes-are-part-of-the-system" thinking that is
the motivation for Catalog storage/queries.
Best Regards, David
--
dasz.at OG Tel: +43 (0)664 2602670 Web: http://dasz.at
Klosterneuburg UID: ATU64260999
FB-Nr.: FN 309285 g FB-Gericht: LG Korneuburg