On Sep 20, 2010, at 7:15 AM, David Schmitt wrote:
> On 9/17/2010 11:03 PM, Luke Kanies wrote:
> [...]
>>> node A { y { 1: } } node B { x { 1: } }
>>>
>>> define y() { $next = $name+1 @@x { $next: } Y<<||>> }
>>>
>>> define x() { $next = $name+1 @@y { $next: } X<<||>> }
>>>
>>> If I'm not completely off, this will create lots and lots of
>>> resources as A and B are evaluated alternately.
>>
>> This might quite possibly destroy the universe if resource collection
>> didn't ignore resources exported by the compiling host. Given that
>> it does, though, you'd likely just get flapping and some very pissed
>> coworkers.
>
> I don't think so:
>
> @@file{"/tmp/foo": ensure=>present; }
> File<<||>>
>
> will create a "/tmp/foo" on the applying host. But then again, I don't know
> the internals of the code...
That is true, but if you then change the catalog to create /tmp/bar on that
host, /tmp/foo will no longer be created.
That is, one compile for a host cannot affect that host's next compile (because
a host doesn't pull its own exported resources from the db, only those of other
hosts), so your cascade of resources can't happen.
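That exclusion rule can be modeled in a few lines of toy Python (this is an
illustration, not Puppet's actual storeconfigs code; the Resource class and
collect function are invented for the sketch):

```python
from dataclasses import dataclass

@dataclass
class Resource:
    host: str        # node whose compile exported this resource
    title: str
    exported: bool = True

def collect(db, collecting_host):
    """Return exported resources, skipping any the collecting host
    itself exported -- so one compile can't feed that host's next one."""
    return [r for r in db if r.exported and r.host != collecting_host]

# Host A exported /tmp/foo, host B exported /tmp/bar.
db = [Resource("A", "/tmp/foo"), Resource("B", "/tmp/bar")]

# When A collects, it only sees B's export, never its own:
print([r.title for r in collect(db, "A")])  # ['/tmp/bar']
```

Because a host's own exports are filtered out of its collections, the x/y
ping-pong above can only bounce between two different hosts' compiles, not
feed back into one host's own.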
>>> The last part might be a little bit off-topic, but I think it does
>>> pertain to the whole "all-nodes-are-part-of-the-system" thinking
>>> that is the motivation for Catalog storage/queries.
>>
>> Yeah, that's a good point - one of the big goals here is to lose the
>> 'nodes sit alone' perspective and really give them membership in a
>> larger whole.
>
> Mentally combining external node classification, fact storage, and
> offline-compile capability really made that idea click for me. It leads to a
> mental model with a single step from definition to Catalog for the whole
> system, as opposed to a Catalog for a single node.
I don't think you're quite at the single-step phase -- you'll probably still
want to compile the catalogs incrementally -- but yeah, you're a lot closer.
And you're certainly getting close to the point of being able to say you have
one catalog which includes all the individual host catalogs, rather than a
bunch of separate catalogs. :)
> The last missing piece would be a puppetrun orchestrator that could take this
> system-wide Catalog, toposort it, and run it on the nodes as necessary. Does
> anyone else see the connection to parallelizing and grouping resource
> application in puppetd?
Yes, that is another (though far from the last, IMO) piece in the puzzle,
although I'd actually split it into two: a compiler process (or, most likely, a
pool of processes), and then a separate system that notifies individual hosts
that they should pull a new catalog and tracks who's checked in and such.
But yeah, that's the general idea.
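For what the toposort-and-run part of that orchestrator might look like, here
is a toy Python sketch (the node names and the dependency graph are invented;
Python's stdlib graphlib does the topological sort, and each batch is a set of
nodes whose puppetd runs could be triggered in parallel):

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

# Hypothetical cross-node dependencies distilled from a system-wide
# catalog: each node maps to the nodes it depends on.
node_deps = {
    "db1":  set(),
    "web1": {"db1"},
    "web2": {"db1"},
    "lb1":  {"web1", "web2"},
}

ts = TopologicalSorter(node_deps)
ts.prepare()
batches = []
while ts.is_active():
    ready = sorted(ts.get_ready())  # nodes whose deps are all satisfied
    batches.append(ready)           # one batch = one parallel wave of runs
    ts.done(*ready)

for batch in batches:
    print("trigger puppetd on:", ", ".join(batch))
# trigger puppetd on: db1
# trigger puppetd on: web1, web2
# trigger puppetd on: lb1
```

The batching is exactly the "parallelizing and grouping" connection David
mentions: everything in one wave is independent, so the orchestrator only has
to serialize between waves.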
--
You can't have everything. Where would you put it?
-- Stephen Wright
---------------------------------------------------------------------
Luke Kanies -|- http://puppetlabs.com -|- +1(615)594-8199