[Puppet Users] Re: Announce: Hiera-Puppet 1.0.0rc1 Available

2012-06-04 Thread Jos Houtman
Hello,

I am trying to install the hiera(-puppet) 1.0 RCs, but I am having trouble 
installing from source. 
The goal is a Puppet 3.0 installation with hiera. 

The Rakefile seems to be broken: it fails on the require on the third 
line. 
What am I missing?

Jos

On Wednesday, May 23, 2012 1:39:17 AM UTC+2, Matthaus Litteken wrote:
>
> Hiera-Puppet 1.0.0rc1 is a feature release candidate designed to 
> accompany Puppet 3.0 and Hiera 1.0. 
>
> It includes Puppet functions for hiera and also the puppet backend for 
> hiera lookups. 
>
> Downloads are available: 
>  * Source 
> http://downloads.puppetlabs.com/hiera/hiera-puppet-1.0.0rc1.tar.gz 
>  * Apt and yum development repositories 
>  * Apple package 
> http://puppetlabs.com/downloads/mac/hiera-puppet-1.0.0rc1.dmg 
>
> It includes contributions from the following people: 
> Gary Larizza, Hunter Haugen, Kelsey Hightower, Ken Barber, Matthaus 
> Litteken, and Nan Liu 
>
> See the Verifying Puppet Download section at: 
>  
> http://projects.puppetlabs.com/projects/puppet/wiki/Downloading_Puppet#Verifying+Puppet+Downloads
>  
>
> Please report feedback via the Puppet Labs Redmine site, using an 
> affected version of 1.0.0rc1: 
>  http://projects.puppetlabs.com/projects/hiera-puppet 
>
> Hiera-Puppet 1.0.0rc1 Changelog 
> === 
> Gary Larizza (2): 
>   894a7a4 Fail if a lookup key isn't passed 
>   927de1f Add test coverage for hiera_hash() 
>
> Hunter Haugen (1): 
>   632457e Rubygems is not required to use hiera 
>
> Kelsey Hightower (2): 
>   48bfccb (#14461) Remove Puppet parser functions 
>   a042de4 Revert "(#14461) Remove Puppet parser functions" 
>
> Ken Barber (1): 
>   2df319a (#14124) Load rake tasks directly to fix tests for Ruby 
> 1.9.x 
>
> Matthaus Litteken (6): 
>   cb721c5 Add mac packaging to hiera-puppet 
>   64b7375 Move conf to ext directory 
>   4101d02 Add debian packaging for hiera-puppet 
>   470c5c8 Add Redhat packaging to hiera-puppet 
>   5adc454 Add package task to tasks 
>   1138e65 Updating CHANGELOG for hiera-puppet 1.0.0rc1 
>
> Nan Liu (1): 
>   eb800e4 (#12037) hiera-puppet should support hash values. 
>

-- 
You received this message because you are subscribed to the Google Groups 
"Puppet Users" group.
To view this discussion on the web visit 
https://groups.google.com/d/msg/puppet-users/-/DDBTSk12aE0J.
To post to this group, send email to puppet-users@googlegroups.com.
To unsubscribe from this group, send email to 
puppet-users+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/puppet-users?hl=en.



Re: [Puppet Users] Re: Puppet/Hiera and Git workflow

2012-05-30 Thread Jos Houtman
As said, we handle this a little differently:
one branch for development, staging, and live.

Development of the puppet code base is done in puppet
environments, which can be used to test a change across systems in any
of the environments.
Once people consider the change working, it is committed and pushed, at
which point we check for lint or syntax errors as a precaution.

Bigger changes are usually hidden in if/else constructs that allow
us to roll them out gradually.

This works for us because:
- The development/staging and live environments have a big overlap.
- We accept that there might be a few mistakes.

It is a tradeoff between rigorous testing and development speed.

Jos

On Tue, May 29, 2012 at 10:34 PM, Andy Taylor  wrote:
> The git branches/Puppet environments actually mirror the
> infrastructure. So we have groups of servers. Unstable is just for
> nodes which I test new functionality on, dev is for web developers. So
> it seemed to make sense to mirror the environments in the git
> repository with branches.
>
> On May 29, 9:28 pm, Nigel Kersten  wrote:
>> On Mon, May 28, 2012 at 6:14 AM, Andy Taylor  wrote:
>> > I'm currently trying to work out the best way to structure my Puppet
>> > environments and VCS structure. At the moment I'm working on
>> > something like this:
>>
>> > Three Git repositories (one for modules, one for Hiera, one for node
>> > manifests)
>> > Multiple branches (each branch representing an environment, e.g.
>> > production, dev, testing etc.)
>>
>> > When changes to modules/Hiera are made, the changes will be made to a
>> > testing branch, and then merged up the branches until it hits
>> > production (with the appropriate testing of course). So something like
>> > this:
>>
>> > unstable > dev > testing > production
>>
>> Do you need a distinction between "unstable" and "dev" ? I've often found
>> that I don't need those to be separate stages.
>>
>> > What system do you guys use? Any suggestions about the above workflow?
>>
>> > Thanks!
>>
>> > Andy
>>
>> > --
>> > You received this message because you are subscribed to the Google Groups
>> > "Puppet Users" group.
>> > To post to this group, send email to puppet-users@googlegroups.com.
>> > To unsubscribe from this group, send email to
>> > puppet-users+unsubscr...@googlegroups.com.
>> > For more options, visit this group at
>> >http://groups.google.com/group/puppet-users?hl=en.
>>
>> --
>> Nigel Kersten |http://puppetlabs.com| @nigelkersten
>> Schedule Meetings at:http://tungle.me/nigelkersten
>
> --
> You received this message because you are subscribed to the Google Groups 
> "Puppet Users" group.
> To post to this group, send email to puppet-users@googlegroups.com.
> To unsubscribe from this group, send email to 
> puppet-users+unsubscr...@googlegroups.com.
> For more options, visit this group at 
> http://groups.google.com/group/puppet-users?hl=en.
>

-- 
You received this message because you are subscribed to the Google Groups 
"Puppet Users" group.
To post to this group, send email to puppet-users@googlegroups.com.
To unsubscribe from this group, send email to 
puppet-users+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/puppet-users?hl=en.



[Puppet Users] Re: Puppet/Hiera and Git workflow

2012-05-29 Thread Jos Houtman
I am setting up a new workflow myself; it will be as follows:

One git repo for modules and manifests, a second for hiera. Branches are 
used for features and for personal development.
I might install forge modules in a different modulepath to force 
working with the community.

The git repository goes through Gerrit, and all commits are automatically 
checked for syntax and lint errors by Jenkins before they are merged.
All module updates might also require a visual code review; I don't know yet.

Different environments (development, staging, etc.) are represented using an 
extra hiera lookup level and are also reflected in the node-to-modules 
mapping that is in place.

This is what works for us.

Jos





On Monday, May 28, 2012 3:14:54 PM UTC+2, Andy Taylor wrote:
>
> I'm currently trying to work out the best way to structure my Puppet 
> environments and VCS structure. At the moment I'm working on 
> something like this: 
>
> Three Git repositories (one for modules, one for Hiera, one for node 
> manifests) 
> Multiple branches (each branch representing an environment, e.g. 
> production, dev, testing etc.) 
>
> When changes to modules/Hiera are made, the changes will be made to a 
> testing branch, and then merged up the branches until it hits 
> production (with the appropriate testing of course). So something like 
> this: 
>
> unstable > dev > testing > production 
>
> What system do you guys use? Any suggestions about the above workflow? 
>
> Thanks! 
>
> Andy 
>
>

-- 
You received this message because you are subscribed to the Google Groups 
"Puppet Users" group.
To view this discussion on the web visit 
https://groups.google.com/d/msg/puppet-users/-/03LLU2p-mvMJ.
To post to this group, send email to puppet-users@googlegroups.com.
To unsubscribe from this group, send email to 
puppet-users+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/puppet-users?hl=en.



[Puppet Users] Re: Announcing Razor

2012-05-26 Thread Jos Houtman
Nice!! 

This has excellent potential. Two quick questions before getting hands-on; 
please tell me to get hands-on first if that is actually the best way to 
see the perhaps obvious.

We run a hybrid system of physical hardware and OpenStack. Does 
Razor play nicely with OpenStack, perhaps by building a custom 
OpenStack image that bootstraps into iPXE?

We also run what I would call a service management database, which knows 
about the running services and their requirements, combined with a program 
that monitors the instances and starts/stops new ones as required. I 
imagine extending this to also monitor physical systems and automatically 
replace failed ones from the reserve pool.

This would require somewhat stricter matching than 32GB/Dell/SSD, because I 
have ten of those and only want one. What strategies could I follow here? 
Query Razor for the available systems and create a matching profile 
based on serial number from the monitoring program? 

Is there also the option to query external resources to determine extra 
facts? Would this be done through the deployment of custom facts?

Great work,

Jos 





On Thursday, May 24, 2012 2:10:14 AM UTC+2, James Turnbull wrote:
>
> Puppet Labs is really thrilled to announce, in conjunction with EMC, our 
> new open source bare metal provisioning tool: Razor. 
>
> Razor is next generation provisioning software that handles bare metal 
> hardware and virtual server provisioning with inventory discovery and 
> tagging, rule-based policy management, and extensible broker plugin 
> integration. It integrates closely with Puppet and Facter. 
>
> The full announcement and a module to install it is on the Puppet Labs 
> blog: 
>
> http://puppetlabs.com/blog/puppet-razor-module/ 
>
> This excellent post from Nick Weaver, the EMC guy behind the original 
> idea, takes you through the history, background and workflow of Razor: 
>
>
> http://nickapedia.com/2012/05/21/lex-parsimoniae-cloud-provisioning-with-a-razor/
>  
>
> And finally - being open source - you can find the code at: 
>
> https://github.com/puppetlabs/Razor 
>
> Regards 
>
> James Turnbull 
>
> -- 
> James Turnbull 
> Puppet Labs 
> 1-503-734-8571 
> To schedule a meeting with me: http://tungle.me/jamtur01 
>
>

-- 
You received this message because you are subscribed to the Google Groups 
"Puppet Users" group.
To view this discussion on the web visit 
https://groups.google.com/d/msg/puppet-users/-/V3xVuswtApUJ.
To post to this group, send email to puppet-users@googlegroups.com.
To unsubscribe from this group, send email to 
puppet-users+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/puppet-users?hl=en.



[Puppet Users] Re: Developers having access to deploy

2012-03-02 Thread Jos Houtman
Hi,

For deployment we do not usually use puppet. The deployments we do with 
puppet are for stable in-house packages.
This is done by releasing a new version in our package environment and 
using ensure => latest for the package type.
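
A minimal sketch of that pattern (the package name is hypothetical):

package { 'acme-webapp':
   ensure => latest,   # rolls forward whenever a new version hits our sources
}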

But for frequent deployments I would personally look towards other 
means of deployment.
We are currently using the Python Fabric library for deployments.

Jos

On Friday, March 2, 2012 10:42:28 AM UTC+1, Thomas Rasmussen wrote:
>
> Hi 
>
> I'm in the process of looking for a way to have developers deploying 
> on their test systems without intervention of sysadmins, to solve this 
> i'd like to use Puppet (either the OSS version or Enterprise, 
> whichever solves the problem). 
>
> I can manage to only grant access to certain systems and limit the 
> ability to execute puppetd --test, however, the developers would like 
> to create a new version of the application and then this should be put 
> into place instead of the old version, but I can't seem to find a 
> solution to this. 
>
> I was thinking somewhat on the option to issue a command like this: 
> puppetd --test --my-app-version 3.2.1 
>
> And then the puppet manifests will use the my-app-version variable to 
> fetch and deploy this specific version. I know that the manifests 
> should be developed with care, which is also the idea. 
>
> Or what solutions do people use in case where developers should have 
> access to deploy, but not have access to the puppetmaster server? 
>
> hope that this can be done. 
>
> Regards 
> Thomas

-- 
You received this message because you are subscribed to the Google Groups 
"Puppet Users" group.
To view this discussion on the web visit 
https://groups.google.com/d/msg/puppet-users/-/MA3s32mKkTAJ.
To post to this group, send email to puppet-users@googlegroups.com.
To unsubscribe from this group, send email to 
puppet-users+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/puppet-users?hl=en.



[Puppet Users] Re: Best way to test changes?

2012-02-24 Thread Jos Houtman
We have a stable environment and an environment for every developer.
Upon changes, we manually test the change using the different
environments.

We also have alerting on the /var/lib/puppet/state/
last_run_summary.yaml file, which tells us if a manifest did not apply
properly.

Cheers,

Jos


On Feb 23, 2:13 pm, Felix Frank 
wrote:
> Hi,
>
> On 02/23/2012 01:27 PM, Gonzalo Servat wrote:
>
> > On Thu, Feb 23, 2012 at 11:09 PM, jimbob palmer  > > wrote:
>
> >     I'm worried about making bad changes to a module which will impact
> >     lots of hosts.
>
> >     How can I avoid this?
>
> >     Ideally I'd like every node to run in noop, and then to approve the
> >     changes if they look right.
>
> > Hi Jim,
>
> > We're not currently using this method, but we're planning on using a
> > second Puppet server which will have a copy of the Puppet tree with
> > whatever major changes have been made in development. We run Puppet from
> > cron so every host would continue to point at the master server, but we
> > would connect to specific hosts and try noop against the second Puppet
> > server.
>
> > I'd like to hear how other people manage this sort of thing.
>
> similarly but using 
> environments:http://docs.puppetlabs.com/guides/environment.html
>
> The nodes are made to do the noop run on their own and store their
> reports on the master. A simple script digests the reports.
>
> Cheers,
> Felix

-- 
You received this message because you are subscribed to the Google Groups 
"Puppet Users" group.
To post to this group, send email to puppet-users@googlegroups.com.
To unsubscribe from this group, send email to 
puppet-users+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/puppet-users?hl=en.



[Puppet Users] Parameterized classes override of parameter

2012-02-20 Thread Jos Houtman
Hi group,

I expected parameterized classes to behave similarly to types in relation
to overrides, but obviously this is not the case.
Could someone explain what I should expect when overriding a parameterized class?

what I have is:

class dns($dns_servers) {
   file { '/etc/resolv.conf':
      content => template('dns/resolv.conf.erb'),
   }
}

class role {                  # role/manifests/init.pp
   class { 'dns':
      dns_servers => ['10.100.100.1'],
   }
}

class role::loadbalancer inherits role {
   Class['dns'] {
      dns_servers => ['127.0.0.1'],
   }
}

node loadbalancer1 {
   include role::loadbalancer
}

I expected resolv.conf to have the 127.0.0.1 address, but it has the
10.100.100.1 address.

Could someone explain to me the rules around parameterized class inheritance?
And if using inheritance to override the general case for very common
modules is not the way, should all this logic then be put into our
extlookup or something like hiera?
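
A minimal hiera-based sketch of what I have in mind (lookup key and data
file layout are hypothetical):

# hieradata/common.yaml:            dns_servers: ['10.100.100.1']
# hieradata/role/loadbalancer.yaml: dns_servers: ['127.0.0.1']

class dns($dns_servers = hiera('dns_servers')) {
   file { '/etc/resolv.conf':
      content => template('dns/resolv.conf.erb'),
   }
}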


Jos

-- 
You received this message because you are subscribed to the Google Groups 
"Puppet Users" group.
To post to this group, send email to puppet-users@googlegroups.com.
To unsubscribe from this group, send email to 
puppet-users+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/puppet-users?hl=en.



[Puppet Users] Re: intermodule dependency

2012-01-31 Thread Jos Houtman


> That's one of many reasons to not do that.  Specifically, one should
> employ class inheritance only when it involves overriding resource
> properties of the parent class.

We have a history of using class inheritance to override variables in
templates or to add extra functionality to the base classes.
Both practices can now be abandoned in favour of parameterized
classes.
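
For example, instead of a subclass that overrides a template variable, a
parameter with a default now does the job (module and values illustrative):

class ntp($servers = ['0.pool.ntp.org']) {
   file { '/etc/ntp.conf':
      content => template('ntp/ntp.conf.erb'),   # the template reads @servers
   }
}

class { 'ntp': servers => ['10.0.0.1'] }   # replaces the inheriting subclass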

> Where all you really want is to compose classes (as I infer is the
> case you describe), you should
> instead use the 'include' function, the 'require' function, or a
> resource-style class declaration.

Exactly where we are going.

> If you use class inheritance *only* to override resource properties
> (subclass bodies contain nothing but resource overrides) then I think
> your subclasses will not suffer from the problem you describe.

Hmm, good point, but I really hope we can get away with just
parameterized classes instead of overrides.

> You will already have recognized that none of this solves your problem
> directly.  Effective dependency management requires discipline,
> planning, documentation, and the occasional large-scale refactoring.
> Correct use of class inheritance fits in mostly by shaping the
> expectations and practices of manifest developers, rather than by
> providing tools to directly control ordering.  I hope you nevertheless
> find this useful.

This sure is useful; talking helps to understand the problem and the
pros/cons of different solutions.


Jos Houtman

-- 
You received this message because you are subscribed to the Google Groups 
"Puppet Users" group.
To post to this group, send email to puppet-users@googlegroups.com.
To unsubscribe from this group, send email to 
puppet-users+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/puppet-users?hl=en.



[Puppet Users] Re: intermodule dependency

2012-01-31 Thread Jos Houtman

> In general, I try and think of module dependencies and organization as
> a matter of composition.  Discrete modules themselves should avoid
> establishing relationships with other modules.  A module should,
> however, be diligent about managing the internal relationships of the
> classes and resources it defines.
>
> Ideally, Puppet itself would be more opinionated about the
> relationship of modules, and we're moving in that direction.  Kelsey
> Alexander is working with Matt Robinson on things like #11979 [1]
> which gets us on the path to managing dependencies automatically.  In
> the meantime, I document and leave it up to the end user to create
> another class that composes modules together.  With your example, I'd
> do something like this as the end user.  As the module author, I'd try
> and write mysql and ldap as if the other didn't exist.

I would love to do that, and the proper way forward here seems to
me to be stages.
Which leads me to the question: when does one use stages, and when
dependencies between classes?
Stages look like they have great potential, but their use seems to be
lacking.
My stab at an answer would be:
dependencies between classes should be used within modules, while stages
should manage the ordering between modules, as sketched below.
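
A minimal sketch of that split (module and stage names are illustrative):

stage { 'packaging': }
Stage['packaging'] -> Stage['main']

class { 'portage': stage => 'packaging' }   # whole module ordered early
include mysql                               # ordinary classes stay in main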

> # /site/manifests/dbserver.pp
> # This is what it means to be a database at the Acme.com site.
> class site::dbserver {
>   class { ldap: }
>   -> class { mysql: }
>
> }

In our case, without stages, this could easily grow to a dependency
chain of 20 to 30 classes.

>
> In this situation I'd update the composition in the site module.  I'd
> also avoid inheritance if at all possible, but that's another story
> for another thread.

Agreed on avoiding inheritance.


> For these reasons I try and think of the whole thing as a composition
> problem.  The things being composed, modules, should try and avoid
> knowing implementation details about each other.

In general I agree with this, but I do think it would be useful if a
module could expose functionality to other classes through defines.
The main reason for this is locality: mysql needs the firewall rule, so
it declares the firewall rule. This improves the self-documenting
nature of the code, gives a clear overview of the options available
when you want a mysql daemon (firewall, performance monitoring,
alerting, maintenance crontabs) through a parameterized mysql class,
and simplifies the node definitions.

But this practice conflicts with the use of stages, or would require
something like a staged define: a define that can be called
from any stage but whose resources are run in a specified stage.

One example of this would be an iptables module which offers a define
that can be used by other modules to add rules to the firewall.
It could go a little something like this:

iptables.pp:
class iptables {
   file { '/etc/firewall/baserules': }

   service { 'firewall':
      ensure => running,
   }
}

define open_internal_port($port) {
   file { "/etc/firewall/${name}":
      content => template('internal_rule.erb'),
   }
   File["/etc/firewall/${name}"] ~> Service['firewall']
}

define open_external_port($port) {
   file { "/etc/firewall/${name}":
      content => template('external_rule.erb'),
   }
   File["/etc/firewall/${name}"] ~> Service['firewall']
}

mysql.pp:
class mysql( $firewall_mode = 'closed' ) {
   case $firewall_mode {
      'closed':   { }
      'internal': { open_internal_port { 'mysql': port => 3306 } }
      'external': { open_external_port { 'mysql': port => 3306 } }
      default:    { fail("no valid firewall_mode specified: ${firewall_mode}") }
   }
}
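
Used from a node definition this would read (node name is hypothetical):

node 'db1.example.com' {
   class { 'mysql': firewall_mode => 'internal' }
}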


>
> This sounds a lot like the Anchor pattern [2].  Are you trying to
> accomplish the same goal of classes encapsulating other classes in the
> dependency graph?

After reading the wiki, it is indeed the anchor pattern.


> Not easily.  =)  The anchor pattern is a bug, certainly, and so is the
> UX of having to manage dependencies so carefully and painfully.  The
> good news is that we're actively working on it.  If you could update
> any of the following bugs with your user stories and desires it will
> help shape the solution.  11832, 12243, 12246, 12249, 12250, 12251,
> 12253, 12255, 12256, 12257, 12258, 12259, 12260,

I will have a look at adding my user stories to those tickets.


Jos

-- 
You received this message because you are subscribed to the Google Groups 
"Puppet Users" group.
To post to this group, send email to puppet-users@googlegroups.com.
To unsubscribe from this group, send email to 
puppet-users+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/puppet-users?hl=en.



Re: [Puppet Users] intermodule dependency

2012-01-29 Thread Jos Houtman
>
> In this case, the link between the differing blocks should be
> externalized from your ldap module (e.g. the ldap module should care
> about stuff related to ldap.. not about relations to other modules).
>
> You could put the order declaration in a "node type" or "node role" kind
> of class that you include in your node.
> say:
>
> class mysql_server_role {
>  include ldap_authentication_role # which declares whatever is needed
>   # for ldap support
>  include mysql
>
>  Class['Ldap'] -> Class['Mysql']
> }
>

I see your point, and we already have this role setup in place.
But maintaining ordering declarations for every module that we include
is going to be painful, especially if, like us, you have widespread use of
common defines that also impose their own ordering.

What I would like is a system similar to stages, but with the ability to
assign stages to more than just classes.

This would allow me to do the following:

# note: a per-resource 'stage' attribute, as used below, is hypothetical
# syntax that puppet does not support today; it is what I would like.

define portage_useflag_override() {
   file { "${name}":
      stage => useflag,
   }
}

define hyves_package($category) {
   package { "${name}":
      stage => package,
   }
}

class portage {
   exec { 'emerge --sync': }
}

class ldap {
   hyves_package { 'nss-switch': }

   notify { 'done ldap': }
}

Stage['portage-setup'] -> Stage['useflag'] -> Stage['package'] ->
Stage['ldap'] -> Stage['main']

class { 'portage': stage => 'portage-setup' }
class { 'ldap':    stage => 'ldap' }

portage_useflag_override { 'test': }

And have an execution order like this one:

Exec['emerge --sync'] -> Portage_useflag_override['test'] ->
Hyves_package['nss-switch'] -> Notify['done ldap']


The two foremost gains with such an approach would be that cross-module
ordering could be worked out using stages,
while the few modules that deliver common services to other modules
through defines could work without complicated, possibly cross-stage,
dependency chains.

Regards,

Jos


>
> --
> Gabriel Filion
>

-- 
You received this message because you are subscribed to the Google Groups 
"Puppet Users" group.
To post to this group, send email to puppet-users@googlegroups.com.
To unsubscribe from this group, send email to 
puppet-users+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/puppet-users?hl=en.



[Puppet Users] Advice/Best practices inter-module dependencies

2012-01-26 Thread Jos Houtman
Hello list,

I am looking for advice/best-practices on how to handle inter-module
dependencies.
We have a fairly large/complex code base (100+ modules) with a lot of 
history (we started at 0.24), and lately we have been looking into how we 
can improve the quality of the codebase.
Parameterized classes and the style guide are quick wins and no-brainers.

But we have some inter-module dependencies, mostly because of ordering, for 
which a proper design pattern is more elusive.

A good example is our ldap setup, which needs to happen after the 
initialization of our packaging system.
It also has to happen before a lot of the other modules, because ldap
provides the details for some of the file owners/groups that are used.

We have experimented with a few methods of setting this up, but 
have always found significant drawbacks.

Without stages we tried three ways of doing this.

The first is creating a dependency chain between classes:
Class['Ldap'] -> Class['Mysql']
This is very easy to do, but doesn't work if we inherit from Ldap, say:
 class ldap::server inherits ldap
The ordering between ldap::server and Mysql is not guaranteed.
It also requires the maintainer of the ldap module to know about all
modules that depend on ldap and update them if he decides to inherit, a
task that is likely to be forgotten.
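
A minimal sketch of the gap (the notify bodies are just markers):

class ldap { notify { 'ldap base': } }
class ldap::server inherits ldap { notify { 'ldap server bits': } }
class mysql { notify { 'mysql': } }

Class['Ldap'] -> Class['Mysql']
# this edge constrains the ldap class only; the extra resources declared
# in ldap::server carry no edge to mysql and may still run after it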

The second is creating dependency chains between resources in the modules, e.g.
notifys.
Every module that is part of a dependency defines a notify { 'endpoint': }
and makes sure that everything within the module is executed before the
notify.
If we inherit from the base class, the overriding class is responsible for
making sure that the endpoint is still the last thing executed in the module,
making it more likely that the ordering of events will remain as we want it
after a continued year of development.
But because of assumptions about our base image, and the rarity of
reinstalls, it is easy to forget the requirements in modules that actually
need them,
leading to some subtle bugs where the first puppet run on a fresh install
might not work but subsequent runs do.
Luckily execution is now in a fixed order; otherwise that would have been a
problem as well.
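
A minimal sketch of that endpoint pattern (package names are illustrative):

class ldap {
   package { 'nss_ldap': }
   notify  { 'ldap::endpoint': }
   Package['nss_ldap'] -> Notify['ldap::endpoint']   # module work precedes its endpoint
}

class mysql {
   package { 'mysql-server':
      require => Notify['ldap::endpoint'],   # depend on the endpoint, not the class
   }
}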

The third is the use of stages for the ordering of actions, but this seems
to be an all-or-nothing approach, and the result is a very splintered
module.
For example, our packaging setup is quite complex. First we initialise the
packaging system and configure all the default package sources; then custom
sources can be configured, and on top of that we allow (un)masking of specific
package versions.
And after all this one can install a package.
We could define 4 stages, and each module that needs to do one of these
actions would need to run classes in the designated stage; this results in
some very splintered modules, as the sketch below shows.
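
A sketch of the four-stage variant (stage and class names are illustrative):

stage { ['pkg_init', 'pkg_sources', 'pkg_mask']: }
Stage['pkg_init'] -> Stage['pkg_sources'] -> Stage['pkg_mask'] -> Stage['main']

class { 'packaging::init':    stage => 'pkg_init' }
class { 'packaging::sources': stage => 'pkg_sources' }
class { 'packaging::masks':   stage => 'pkg_mask' }
# every module that touches package setup has to split its classes
# across these stages, which is what splinters the modules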


Or we could define only 2 stages, have the base setup run before 
everything else, and then wrap all other actions with defines that specify 
the ordering between them using some self-built ordering mechanism based on 
notifys or classes.
A problem with this would be that those defines could only be used in the
main stage, because of the built-in ordering. Modules adding more stages,
like ldap, would need to do something custom for installing the required
packages, which again makes maintenance of the package module more
difficult to do right.


So after this rather long email explaining our problem and some of the 
options we explored: how do you guys handle these kinds of complex 
inter-module dependencies?


Cheers,

Jos Houtman

-- 
You received this message because you are subscribed to the Google Groups 
"Puppet Users" group.
To post to this group, send email to puppet-users@googlegroups.com.
To unsubscribe from this group, send email to 
puppet-users+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/puppet-users?hl=en.



[Puppet Users] intermodule dependency

2012-01-26 Thread Jos Houtman
Hello list,

I am looking for advice/best-practices on how to handle inter-module
dependencies.
We have a fairly large/complex code base (100+ modules) with a lot of
history (we started at 0.24), and lately we have been looking into how we
can improve the quality of the codebase.
Parameterized classes and the style guide are quick wins and no-brainers.

But we have some inter-module dependencies, mostly because of ordering, for
which a proper design pattern is more elusive.

A good example is our ldap setup, which needs to happen after the
initialization of our packaging system.
It also has to happen before a lot of the other modules, because ldap
provides the details for some of the file owners/groups that are used.

We have experimented with a few methods of setting this up, but
have always found significant drawbacks.

Without stages we tried three ways of doing this.

The first is creating a dependency chain between classes:
Class['Ldap'] -> Class['Mysql']
This is very easy to do, but doesn't work if we inherit from Ldap, say:
 class ldap::server inherits ldap
The ordering between ldap::server and Mysql is not guaranteed.
It also requires the maintainer of the ldap module to know about all
modules that depend on ldap and update them if he decides to inherit, a
task that is likely to be forgotten.

The second is creating dependency chains between resources in the modules, e.g.
notifys.
Every module that is part of a dependency defines a notify { 'endpoint': }
and makes sure that everything within the module is executed before the
notify.
If we inherit from the base class, the overriding class is responsible for
making sure that the endpoint is still the last thing executed in the module,
making it more likely that the ordering of events will remain as we want it
after a continued year of development.
But because of assumptions about our base image, and the rarity of
reinstalls, it is easy to forget the requirements in modules that actually
need them,
leading to some subtle bugs where the first puppet run on a fresh install
might not work but subsequent runs do.
Luckily execution is now in a fixed order; otherwise that would have been a
problem as well.

The third is the use of stages for the ordering of actions, but this seems
to be an all-or-nothing approach, and the result is a very splintered
module.
For example, our packaging setup is quite complex. First we initialise the
packaging system and configure all the default package sources; then custom
sources can be configured, and on top of that we allow (un)masking of specific
package versions.
And after all this one can install a package.
We could define 4 stages, and each module that needs to do one of these
actions would need to run classes in the designated stage; this results in
some very splintered modules.


Or we could define only 2 stages, have the base setup run before
everything else, and then wrap all other actions with defines that specify
the ordering between them using some self-built ordering mechanism based on
notifys or classes.
A problem with this would be that those defines could only be used in the
main stage, because of the built-in ordering. Modules adding more stages,
like ldap, would need to do something custom for installing the required
packages, which again makes maintenance of the package module more
difficult to do right.


So after this rather long email explaining our problem and some of the
options we explored: how do you guys handle these kinds of complex
inter-module dependencies?


Cheers,

Jos Houtman

-- 
You received this message because you are subscribed to the Google Groups 
"Puppet Users" group.
To post to this group, send email to puppet-users@googlegroups.com.
To unsubscribe from this group, send email to 
puppet-users+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/puppet-users?hl=en.



[Puppet Users] Re: Facter - the future - your input needed

2009-01-30 Thread Jos Houtman

Hi,

I have only developed an ip/interface fact that uses iproute2 instead of
ifconfig, in order to detect multiple IPs per interface.

And my conclusion was that I really wanted more complex data structures
and easier integration with puppet.

Jos

On 1/29/09 10:55 PM, "James Turnbull"  wrote:

> 
> 
> Hi all
> 
> We're currently looking at the next release of Facter and the future
> direction of the tool.  I'd like to try and prompt some discussions on
> facter and what people want from it.
> 
> As a starter here's some (although not all) of the ideas we'll be
> working through:
> 
> 1.  Namespaces - add a namespace or tiered namespace to Facter, i.e.
> network -> interface -> ipaddress.
> 2.  Additional output formats - JSON, XML? (winces) - Facter already
> outputs in YAML.
> 3.  Additional collection mechanisms, for example the ability to
> specify a fact file, /etc/facter.conf, containing fact name=value pairs.
> 4.  A more Ruby DSL for facts
> 5.  Rich data structures/values in facts
> 
> If you have additional ideas/requirements/issues/comments we'd welcome
> feedback.
> 
> Regards
> 
> James Turnbull
> 
> --
> Author of:
> * Pulling Strings with Puppet
> (http://www.amazon.com/gp/product/1590599780/)
> * Pro Nagios 2.0
> (http://www.amazon.com/gp/product/1590596099/)
> * Hardening Linux
> (http://www.amazon.com/gp/product/159059/)
> 


--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"Puppet Users" group.
To post to this group, send email to puppet-users@googlegroups.com
To unsubscribe from this group, send email to 
puppet-users+unsubscr...@googlegroups.com
For more options, visit this group at 
http://groups.google.com/group/puppet-users?hl=en
-~--~~~~--~~--~--~---



[Puppet Users] Accessing files in custom function relative to the current module

2008-12-16 Thread jos houtman

Hi list,

I am working on a module for ganglia that builds two config files.

The first is an aggregated config file and contains entries like
this:

group   "clustername"   node1   node2
group   "clustertwo"    node4   node5

The second file only contains the entry for the intended cluster; in
the case of clustertwo this would be:

send_channel node4
send_channel node5

I intend to solve this using two custom functions that read the same
config file.
The first returns the list of nodes for the requested cluster.
The second returns the list of clusters with their nodes.

But how can I get the current module path, so that I can access the
config file that is in modules/ganglia/config/?
I want something like this:  IO.readlines( puppet.prefix() +
"config/ganglia-config.cfg" )

Is this possible?


With regards,

Jos Houtman

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"Puppet Users" group.
To post to this group, send email to puppet-users@googlegroups.com
To unsubscribe from this group, send email to 
puppet-users+unsubscr...@googlegroups.com
For more options, visit this group at 
http://groups.google.com/group/puppet-users?hl=en
-~--~~~~--~~--~--~---