Luke Bigum <luke.bi...@lmax.com> writes:
> In my mind the "purest" way would be to go individual modules for each
> package/service combination. If the only requirement is that you are
> handling the differences between Red Hat and Debian flavours, then a
> module per package/service. These modules would be wholly self
> contained and rely on some of the standard set of Facter facts. And
> then you could publish them :-) It would also avoid future duplicate
> resource declarations where someone's embedded "packageX" into one
> profile, and it clashes with "packageX" in another profile.
>
> I can see the argument for putting package installs and service starts
> into a profile but only if it's global for every operating system. So
> if there was profile::webserver that needed Package[openssl] and that
> was correct for all operating systems, then fine. However if you have
> to start doing conditional logic to find the right name of
> Package[openssl] for Red Hat and Debian, then profile::webserver is
> not the place. profile::webserver is a container of business logic
> that relates wholly and only to your business and your team. The exact
> implementation of Package[openssl] has nothing to do with
> profile::webserver; as long as openssl gets there somehow, that should
> be all you care about at the Profile level. Implementing
> Package[openssl] really depends on the operating system Facts alone,
> and this should be in its own module... and... all of a sudden your
> profile::webserver is operating system agnostic, which is cool.

I agree 100%.
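
For what it's worth, the split I have in mind looks roughly like this
(just a sketch; the class layout and fact lookup are one way to do it,
and the package names in the selector are placeholders):

  # modules/openssl/manifests/init.pp -- the only place that knows the OS
  class openssl {
    $package_name = $facts['os']['family'] ? {
      'RedHat' => 'openssl',     # placeholder names, adjust per distro
      'Debian' => 'openssl',
      default  => 'openssl',
    }

    package { 'openssl':
      ensure => installed,
      name   => $package_name,
    }
  }

  # modules/profile/manifests/webserver.pp -- business logic only
  class profile::webserver {
    include openssl
  }

Any other profile that needs openssl just does "include openssl", so you
also sidestep the duplicate Package['packageX'] declarations Luke
mentioned.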

> Question - why is it taking your team getting annoyed at generating
> boilerplate code? Surely you have some sort of "puppet module create"
> wrapper script or you use https://github.com/voxpupuli/modulesync? If
> you've got so much overhead adding boilerplate code for your boring
> modules then I think you're tackling the wrong problem... If you can
> bring the boilerplate code problem down to 1-2 minutes, it's got to
> only take another 5-10 minutes tops to refactor one package{} and one
> service{} resource out of the profile and into its own module, and
> then your team argument kind of goes away.

Lately we've been creating a new module every 3-4 weeks, so it's been
faster to copy an existing module and run a Perl script that renames the
module, packages, and services than it would be to write or adapt a
script that generates new modules from a template plus parameters. It
only takes me a minute or two to create a new module. The
counter-argument is that it only takes a few seconds to add a
"package { 'foo': }" to a profile, and that a module per package/service
leads to an unmanageable set of hundreds of modules.

While I'm in the camp that separate modules for each package/service are
a good thing, I started this thread in order to double-check my opinion.

> Question - why are you writing 120 modules yourself? Are there really
> no other implementations of these things on the Forge or GitHub?

In some cases we've found existing modules, and we even use a few. But
in the general case we've found it useful to write our own modules so
they have the same "look and feel", i.e. they use the same sets of
parameters and facts with the same semantics. Our basic package/service
boilerplate is based on the example42 modules (at least as they were
~2-3 years ago).
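
The common shape is roughly this (parameter names are illustrative, not
our exact boilerplate or the current example42 layout):

  class foo (
    $ensure         = 'present',
    $package_name   = 'foo',
    $service_name   = 'foo',
    $service_enable = true,
  ) {
    package { $package_name:
      ensure => $ensure,
    }

    service { $service_name:
      ensure  => $ensure ? { 'absent' => 'stopped', default => 'running' },
      enable  => $service_enable,
      require => Package[$package_name],
    }
  }

Having every module take the same parameters is part of what makes
copying and renaming an existing one so quick.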

   --jtc
