[Puppet Users] Re: Best practice for Puppet CA servers in multiple Data Centres - upgrading to v6

2019-09-18 Thread Luke Bigum
On Wednesday, 18 September 2019 05:12:49 UTC+1, chris wrote:
>
> Hi Luke,
>
> That's very interesting; thanks.
>
> We do have 2 non-CA puppetmasters in each DC, so you are saying that 
> client servers will continue to be able to call in, but we won't be able to 
> setup any new ones?
>

Yes, and to make doubly sure I just shut down my own CA / Signing Master, 
and an Agent in a satellite DC was able to check in with the local 
Compiling Master fine (because the Agent already has a Puppet cert).  I 
find DNS SRV records useful for managing this:

https://puppet.com/docs/puppetserver/5.1/scaling_puppet_server.html#using-dns-srv-records

Obviously this approach won't work if you're spinning up many short lived 
VMs or disposable infrastructure.
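For reference, a minimal sketch of that setup (hostnames are made up; see the linked docs for the authoritative settings):

```ini
# Agent-side puppet.conf sketch (domain is hypothetical):
[main]
use_srv_records = true
srv_domain = dc1.example.com

# The dc1.example.com zone would then carry SRV records like:
#   _x-puppet._tcp.dc1.example.com.    SRV 0 5 8140 compile1.dc1.example.com.
#   _x-puppet-ca._tcp.dc1.example.com. SRV 0 5 8140 ca.hq.example.com.
```

The nice property is that each DC's zone can point `_x-puppet._tcp` at its local compile masters while `_x-puppet-ca._tcp` still points at the single signing master.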

> We do only have one puppetdb & foreman in the main DC.
>

PuppetDB is a different matter...  In theory an Agent should be able to run 
without it, except when the Compiling Master needs to query PuppetDB to 
realise exported resources.  From memory the Agents will complain about 
pushing their Facts into PuppetDB, but that in itself does not stop the run - 
I have seen catalog compilations succeed with PuppetDB offline, though it 
wasn't perfect.  The last time I tried PuppetDB maintenance during working 
hours, the after-effects annoyed my whole team, so I didn't have the luxury 
of finding out exactly what relied on PuppetDB, nor which config options 
could lessen the impact.  Since we use exported resources a lot and these are 
stored in PuppetDB, it makes sense that any catalog reliant on realising 
exported resources would fail.
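A minimal sketch of that exported-resource dependency (the resource choice is illustrative):

```puppet
# An exported resource (@@) is stored in PuppetDB when this node's
# catalog is compiled...
@@host { $facts['networking']['fqdn']:
  ip => $facts['networking']['ip'],
}

# ...and this collector queries PuppetDB at compile time on every
# node that realises it. If PuppetDB is unreachable, this catalog
# compilation fails.
Host <<| |>>
```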

> Intermediate Certs look a bit fiddly but might be an option. 
> Just to clarify, using these would mean we could also standup new 
> client-servers in the other DCs if the main DC goes down?
>

No, if you've got one CA / Signing Master, any new agent (fresh install) 
would send its certificate signing request to your Signing Master, also 
sometimes called a Master of Masters.  If you had a critical need you could 
turn one of your existing masters in a DC into a CA, then fix up the certs 
later - basically destroy and re-add all the Agents once the main DC was 
back online.

-- 
You received this message because you are subscribed to the Google Groups 
"Puppet Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to puppet-users+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/puppet-users/324d63e0-aa81-4729-bebf-416619dfaecc%40googlegroups.com.


[Puppet Users] Re: Best practice for Puppet CA servers in multiple Data Centres - upgrading to v6

2019-09-17 Thread Luke Bigum
It depends on how often you build "new" machines, and whether you'd need to 
bootstrap new Puppet Agents while your DCs were cut off from one another.  I 
get away with one CA for the entire estate, plus multiple redundant compile 
masters at each DC.  That way you don't need to sync certificates around; an 
Agent only needs to contact the CA the first time it checks in.  This is 
simple, but it is a single point of failure.  You're probably going to have 
one PuppetDB anyway (or a Postgres cluster in one location)?

To do it properly, though, I think you would need each Puppet Server to have 
its own intermediate CA, all signed from a common root CA of yours:

https://puppet.com/docs/puppetserver/5.2/intermediate_ca_configuration.html


On Tuesday, 17 September 2019 07:08:39 UTC+1, chris wrote:
>
> Hi Guys,
>
> so we've got a few data centres spread across the world and are looking to 
> upgrade from Puppet v4 to Puppet v6.
>
> At the moment we just have the one CA in the original DC (fast growing 
> company).
>
> I like the idea of having a separate CA in each DC and having the "local" 
> machine use that - simples .. ;)
>
> However, I'd like to know if there are any sane alternatives as I'll need 
> to persuade the rest of the team/mgrs.
> Is it  possible/sane to just build a CA in each DC but have it not active 
> and then rsync the certs across every hour/day  from the active CA & bring 
> it up if (ie when)  the main CA/DC goes away.
>
> Are there any other sensible ideas out there?
> Ideally, what is the recommended best practice by Puppet (we are on the  
> FOSS version, so I can't ask them).
>
> FWIW, we use Foreman to keep an eye on stuff & I believe(?) it could be 
> tricky to have multiple CAs talking to it ??
> (I know nothing about how the foreman - puppet cxn works).
>
> Cheers
> Chris
>



[Puppet Users] Re: Roles and profiles dissent

2019-08-03 Thread Luke Bigum

On Saturday, 3 August 2019 02:03:29 UTC+1, Chris Southall wrote:
>
> Hi Luke.  Thanks for a thoughtful and detailed response.
>
>
You are most welcome.

 

> I'd like to think I grasp the roles/profiles concept, but am just not 
> convinced it's a better approach.  Abstracting away configuration details 
> and exposing a limited set of parameters results in uniform 
> configurations.  In doing so it also seems it limits flexibility and 
> ensures that you'll continue to spend a good deal of time maintaining your 
> collection of profiles/modules.
>

Absolutely.  One of the key points I never told my team was that I was 
enforcing a style that'd make it harder for them to change anything.  This 
is purely entropy reduction.  On a very slow day LMAX Exchange trades 
US$100,000 a second, so it's very important to me that, say, someone doesn't 
have the power to easily mess up the indentation on one line of YAML... 
thereby breaking a big hash in Hiera... which is being used to generate a 
long list of puppetlabs-firewall rules... which ends up removing half the 
firewall rules on a machine.  That of course never happened (¬_¬) ... 
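The Hiera-driven firewall pattern described above looks roughly like this (the lookup key is hypothetical):

```puppet
# Every entry in a Hiera hash becomes a firewall rule. A one-line
# indentation slip in the YAML can silently drop half of the hash,
# and with it half the generated rules.
$rules = lookup('profile::firewall::rules', Hash, 'deep', {})

$rules.each |String $name, Hash $params| {
  firewall { $name:
    * => $params,   # splat the per-rule parameters from Hiera
  }
}
```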


> Speaking of hiera tricks, I created an exec resource with the command 
> defined as a multi-line script to include variables and function 
> declarations.  I use this to collect data and create local facts.  The next 
> puppet run creates additional resources based on the presence of these 
> facts.  This is basically the same as creating a module with external 
> facts, but doesn't require a module.  An upside is that the fact script 
> doesn't need to execute on every puppet agent run, with the downside being 
> that the host takes a second puppet run to create all resources.  I'm not 
> sure if I should be proud or ashamed of what I did, but it works!
>

Two-pass Puppet is often hard to get away from 100%.  If it works, and your 
team can understand it / debug it, that's probably more important.


> This may be the greatest factor to influence the decision.  In my case we 
> have 2 people working with puppet, and the system we're building is to be 
> handed over to team with little to no puppet experience.  This system runs 
> at a single site with only a couple hundred managed nodes and maybe a 
> couple dozen unique configurations. 
>

It sounds like your team size fits your design choice.  There's one aspect 
of the Role Profile pattern that relates to what you say above that I 
haven't talked about (because I don't do it).  It's actually one of the 
core principles in Craig Dunn's early presentations.  When encapsulating 
business design in Profiles, you create an interface for how that business 
deliverable can be changed (in my example, a Statistics Collection 
Server).  It's possible to give people outside the Puppet team the ability 
to configure that interface in standard ways.  The core Puppet people 
produce well tested Profiles that, say, the web developers consume and 
configure for their purposes.  The Web developers only know a little bit of 
Puppet but they can do basic things like change some web server settings by 
tweaking the parameters of the Profiles given to them by the core Puppet 
people.

What this looks like in practice is the web developers either having some 
level of access to Hiera (eg: they can write to a level of the Hierarchy 
that's lower priority than the Puppet team), or partial write access to a 
Puppet ENC.  If you are a team of two building everything right now and 
handing over to another team, what you might want to do in future is allow 
this other team a bit more self-service to make their own changes.  You of 
course still need to be in control of the standard build, but you're not 
doing every little thing for them.  This would work well in a company where 
various teams can spin up instances of their own cloud infrastructure.
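What that hierarchy split might look like, as a sketch (paths and level names are hypothetical):

```yaml
# hiera.yaml: the web team writes to a level that sits below
# (lower priority than) the core Puppet team's data, so the core
# team's settings always win on conflict.
version: 5
hierarchy:
  - name: "Per-node overrides (core Puppet team)"
    path: "nodes/%{trusted.certname}.yaml"
  - name: "Web team self-service"
    path: "teams/web/%{trusted.certname}.yaml"
  - name: "Common defaults (core Puppet team)"
    path: "common.yaml"
```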

> Well, you have caused me some guilt that maybe I've taken the easy way out 
> rather than becoming more proficient with puppet.  Once you've had that 
> first hit and instant high from the hiera crack pipe... it's hard not to go 
> back.
>

From what you've explained about your company, I think your choice of style 
is appropriate right now.  The only thing I can stress is: don't let this 
become a limitation in three years when your company grows.  The cost of an 
operational fault is also a factor.  If it's relatively inexpensive to fail 
or break something, fix it, and race on, then optimising for speed of 
delivery makes perfect sense.

Rob's suggestions of learning the Puppet 4/5/6 DSL functions that replace 
create_resources() are a great starting point.  It's sometimes a hard thing 
to grasp (I have to re-read the Puppet Docs on each function quite often), 
but if you can master the map(), reduce() and each() functions, you'll 
learn quite a lot of data manipulation tricks. Then if it becomes more 
efficient for you down the line, you can begin moving some of your business 
logic out of Hiera and 
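For reference, the iteration functions mentioned above look roughly like this (a sketch; resource names are made up):

```puppet
$ports = [80, 443, 8080]

# each: run a block of side effects once per element
$ports.each |Integer $p| {
  notify { "port_${p}": }
}

# map: transform a list into a new list
$names = $ports.map |Integer $p| { "port_${p}" }  # ['port_80', 'port_443', 'port_8080']

# reduce: fold a list down to a single value
$total = $ports.reduce(0) |$memo, $p| { $memo + $p }  # 8603
```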

[Puppet Users] Re: Roles and profiles dissent

2019-08-01 Thread Luke Bigum
Hi Chris,

Quite a similar question was posted about two weeks back, you might find 
that very interesting:

https://groups.google.com/forum/#!topic/puppet-users/NW2yuHMJvsY

On Thursday, 1 August 2019 17:01:44 UTC+1, Chris Southall wrote:
>
> Our site is using a collection of puppet modules to manage various Linux 
> components using the roles and profiles model.  While it works OK for the 
> most part, I often find it necessary to update a module or profile for some 
> reason or other.  Modules obtained from puppet forge sometimes don't quite 
> do what is needed, and writing good quality modules on your own can be a 
> challenge.  
>


There was another recent post about using Forge modules or importing the 
Puppet code into a personal Git repository directly:

https://groups.google.com/forum/#!topic/puppet-users/vcp-pVYC8b0

If you are a confident Puppet Coder, you might prefer to import the source, 
patch the module to add your feature, then submit the patch back upstream.

 

> When using roles and profiles you end up declaring all the module 
> parameters again to avoid losing functionality and flexibility.
>

... Not sure I agree with that statement.  That sounds odd.  Why would you 
be re-declaring module parameters if you're not changing something from the 
defaults?  And if you are intending to change something, then of course you 
are supplying different parameters?

 

> You also need to be familiar with all the classes, types, and parameters 
> from all modules in order to use them effectively.
>

Ideally the README page of a module would contain amazing user level 
documentation of how the module should work... but not that many do.  I 
often find I have to go read the Puppet code itself to figure out exactly 
what a parameter does.
 

> To avoid all of the above, I put together the 'basic' module and posted it 
> on the forge:  https://forge.puppet.com/southalc/basic
>

Ok :-) I'm beginning to see what the core of your problem is.  The fact 
that you've created your own module to effectively do create_resources() 
hash definitions says to me that you haven't quite grasped the concepts of 
the Role / Profile design pattern.  I know I have a very strong view on 
this subject and many others will disagree, but personally I think the Role 
/ Profile pattern and the "do-everything-with-Hiera-data" pattern are 
practically incompatible.

> This module uses the hiera_hash/create_resources model for all the native 
> puppet (version 5.5) types, using module parameters that match the type 
> (exceptions for metaparameters, per the README).  The module also includes 
> the 'file_line' type from puppetlabs/stdlib, the 'archive' type from 
> puppet/archive, and the local defined type 'binary', which together provide 
> a simple and powerful way to create complex configurations from hiera.  All 
> module parameters default to an empty hash and also have a merge strategy 
> of 'hash' to enable a great deal of flexibility.  With this approach I've 
> found it possible to replace many single purpose modules; it's much faster 
> and easier to get the results I'm looking for.
>

A Hiera-based, data-driven approach will always be faster to produce a 
"new" result (just like writing Ansible YAML is faster to produce than 
Puppet code)...  It's very easy to brain dump configuration into YAML and 
have it work, and that's efficient up to a certain point.  For your simple 
use cases, yes, I can completely see why you would be looking at the Role 
Profile pattern and saying to yourself "WTF for?".  I think the tipping 
point of which design method becomes more efficient directly relates to how 
complicated (or how much control) you want over your systems.

The more complicated you go, the more I think you will find that Hiera just 
doesn't quite cut it.  Hiera is a key value store.  You can start using 
some neat tricks like hash merging, you can look up other keys to 
de-duplicate data... When you start to model more and more complicated 
infrastructure, I think you will find that you don't have enough power in 
Hiera to describe what you want, and that you need real programming 
constructs (eg: if statements, loops, map-reduce).  The Puppet DSL gives 
you those.
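For example, here is a sketch of the kind of logic that is awkward in a flat key/value store but trivial in the DSL (the class and hostnames are made up, assuming the puppetlabs-ntp module):

```puppet
# Derive configuration from facts instead of enumerating it per-node
# in Hiera: a conditional that picks NTP servers by data centre.
class profile::ntp {
  if $facts['networking']['domain'] == 'dc1.example.com' {
    $servers = ['ntp1.dc1.example.com', 'ntp2.dc1.example.com']
  } else {
    $servers = ['0.pool.ntp.org', '1.pool.ntp.org']
  }

  class { 'ntp':
    servers => $servers,
  }
}
```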

> Yes, the hiera data can become quite large, but I find it much easier to 
> manage data in hiera than coding modules with associated logic, parameters, 
> templates, etc.  Is this suitable for hyper-scale deployment?  Maybe not, 
> but for a few hundred servers with a few dozen configuration variants it 
> seems to work nicely.  Is everyone else using puppet actually happy with 
> the roles/profiles method?
>

 If you are only making small-to-medium changes to a standard operating 
system, and/or your machines are short-lived cloud systems that get thrown 
away after half an hour, then sure, a Hiera-only approach will work fine at 
the scale you are suggesting.

I also think team size and composition is a big factor.  If I was in a team 
o

Re: [Puppet Users] How do you keep the forge modules you use up to date (and keep your sanity)

2019-07-20 Thread Luke Bigum
On Wednesday, 10 July 2019 18:39:44 UTC+1, Martin Alfke wrote:
>
> Hi,
>
> we never use the puppet module tool.
> Instead we mirror upstream modules on an internal git server (including 
> tags) and reference module, git url and tag in a control-repository 
> Puppetfile.
> When we want to upgrade modules we create a branch and veriffy that 
> everything still works as expected.
> We sometimes even use the octocatalog-diff tool to verify catalogs build 
> with old and new module versions.
>
> hth,
> Martin
>


We do the same - we mirror into an internal GitLab, and use r10k to Git 
clone the upstream modules onto Puppet Masters as if they were our own 
internal ones.  This also helps with speed: usually one of our engineers 
finds a problem in an upstream module that's not fixed in the latest 
commits, and they want the fix *now*.  So we patch it internally 
immediately, then submit that patch upstream, where more often than not it 
sits for 1-2 weeks waiting for someone to review and merge :-)
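As a sketch, that mirroring setup looks something like this in a control-repository Puppetfile (hostnames and versions are hypothetical):

```ruby
# Puppetfile: upstream modules mirrored into an internal GitLab
# and pinned to tags, so r10k deploys reproducible versions.
mod 'stdlib',
  :git => 'https://gitlab.internal.example.com/puppet/puppetlabs-stdlib.git',
  :tag => '6.0.0'

mod 'apache',
  :git => 'https://gitlab.internal.example.com/puppet/puppetlabs-apache.git',
  :tag => '5.1.0'
```

When we need a fix *now*, we commit the patch to our mirror and bump the tag, then carry on while the upstream PR waits for review.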

You still have to pay attention to dependencies and the release notes - 
sometimes we've done big version jumps and missed some sort of functional 
change that's bitten us (but that would happen with Forge modules anyway).  
There's only a handful of modules we pull directly from the Forge, and 
those are modules that don't have a one-to-one mapping of Git repo to Forge 
module (such as R.I.'s Choria agent modules).

-Luke



[Puppet Users] Re: Puppet Module Best Practice (Roles/Profiles)

2019-07-20 Thread Luke Bigum
On Friday, 19 July 2019 01:59:26 UTC+1, Lesley Kimmel wrote:
>
> Hi all;
>
> I told him if it was the right way then all the smart people working with 
> and developing Puppet would have put it out as the best practice. However, 
> I can't seem to come up with a really great scenario that will convince 
> him. Can anyone share thoughts on scenarios where this patter will blow up 
> [hard]?
>
> Thanks!
>


Everyone else's replies touch on all the major points, so I can't add too 
much more there.  I do have an ageing blog post, though, and it might be 
helpful... The post describes a contrived scenario based on a real-world 
problem: it starts with an engineering team working mostly out of Hiera 
YAML files, then over time getting into a situation where that design 
pattern was causing too much pain, and thus converting to a role-profile 
pattern.  All the internal technical secrets are obfuscated away behind 
this contrived scenario, but something very close to this happened to us - 
a medium sized company that's been using Puppet for 8+ years.

The problem being solved is to do with creating networking on Linux 
servers, so it's a great deal more complicated than just adding a package 
to a machine. However if you can understand the concepts and the challenges 
faced, it might work as a real-world example for your Junior staff 
member... Maybe... 30% chance :-)

The first half of the page is the scenario / blog post (which is the bit 
you want); the second half is the docs for the example module itself, 
describing our solution (which won't be useful to you):

https://github.com/lukebigum/puppet-networking-example



Re: [Puppet Users] converting Puppet reports to JUnit

2019-05-10 Thread Luke Bigum
On Friday, 10 May 2019 14:04:33 UTC+1, Henrik Lindberg wrote:
>
> I remember using a JUnit compatible report format plugin for rspec. 
> Maybe that is what you are looking for? 
>
> This was quite some time ago and I don't remember its name. 
>

Sort of.  I was also looking into rspec report formats, as those would plug 
in better to rspec-puppet / beaker-puppet.  This is more about taking the 
raw Puppet report YAML (https://puppet.com/docs/puppet/6.4/format_report.html) 
and converting it to a testing framework's report format (and JUnit is 
pretty common).  The end result would be that any failures in the Puppet 
run could be presented in a CI system as "test" failures.

The YAML's pretty simple, so it won't be too hard to write; I just thought 
someone might have a library that already does it...  :-)
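As a sketch of how small that conversion could be (the helper name and the report field shapes are my assumptions, based on the format_report docs linked above - check them against your Puppet version):

```ruby
# Hedged sketch: turn a parsed Puppet report into JUnit XML.
# Assumes the report has a 'resource_statuses' hash whose values
# carry a 'failed' flag, per the Puppet report format docs.
require 'rexml/document'

def report_to_junit(report)
  statuses = report.fetch('resource_statuses', {})
  doc = REXML::Document.new
  doc << REXML::XMLDecl.new('1.0', 'UTF-8')
  suite = doc.add_element('testsuite',
    'name'     => report['host'] || 'puppet',
    'tests'    => statuses.size.to_s,
    'failures' => statuses.count { |_, s| s['failed'] }.to_s)
  statuses.each do |resource, status|
    # One <testcase> per Puppet resource; failed resources get a <failure>.
    tc = suite.add_element('testcase', 'name' => resource)
    tc.add_element('failure', 'message' => "#{resource} failed") if status['failed']
  end
  out = +''
  doc.write(out)
  out
end

# Tiny fabricated report, just to show the shape:
sample = {
  'host' => 'web01.example.com',
  'resource_statuses' => {
    'Package[httpd]' => { 'failed' => false },
    'Service[httpd]' => { 'failed' => true },
  },
}
puts report_to_junit(sample)
```

A real version would `YAML.load_file` the report from Puppet's reportdir and probably include the failure events as the `<failure>` body.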



[Puppet Users] converting Puppet reports to JUnit

2019-05-10 Thread Luke Bigum
Hello,

Has anyone had the need to convert Puppet's YAML reports into another 
format, such as JUnit XML?  I'm thinking of taking the reports of 
Acceptance test runs of Roles (potentially thousands of resources), and 
parsing them into reports for a CI system.  The report format doesn't look 
too complicated, but before I reinvent someone else's wheel, I thought I'd 
check if there's any code I could steal off someone?

Cheers,

-Luke



[Puppet Users] preparing internal fixtures on Beaker VMs for acceptance tests

2019-04-09 Thread Luke Bigum
Hello,

What's the state of the art nowadays for preparing fixtures inside Beaker 
SUTs?  The Beaker module install helper works a treat when all dependencies 
are available on the Forge and listed in the metadata.json file, however it 
doesn't help for internal modules in private SCM repos.

Are people calling r10k or Librarian from inside a Beaker instance to 
download all dependencies?  Are people running internal / private Forges?  
Or are people still doing manual rsync / scp / git cloning repos directly 
into the Beaker instance?

Thanks,

-Luke



Re: [Puppet Users] Re: New Deferred type and agent data lookups in Puppet 6

2018-08-31 Thread Luke Bigum
On Friday, 31 August 2018 16:41:34 UTC+1, Chadwick Banning wrote:
>
> So for this example, there are some sort of limitations as to what the 
> 'vault_lookup' function is able to do internally? I had just assumed that 
> as long as the function returned a simple value, what the function does 
> internally was open.
>
> As an example, could Deferred be used to read and extract a value from a 
> file agent-side?
>


In theory, you probably could; the Ruby code just executes on the Agent.  
Personally, though, if I found myself in this situation I'd really think 
about what I was trying to achieve and why I was there in the first 
place...  Perhaps it's the Puppet design that needs to be refactored, or 
if I really wanted to "do" something on an Agent, a Custom Type / Provider 
might be a better vehicle.

I can partially see the argument in preceding posts for making decisions 
on run-time environment data...  I'd argue that if you're writing Ruby to 
run via Deferred, it's not that much further to write a Fact.  Also, unit 
and acceptance testing code that relies on a run-time Fact seems very 
difficult.
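For reference, in Puppet 6 the real syntax for the idea sketched below ended up looking like this (the secret path and Vault URL are hypothetical; check the argument shapes against your vault_lookup version):

```puppet
# Deferred defers the function call from the master to the agent:
# the catalog carries a placeholder, and the agent resolves it at
# apply time by calling vault_lookup::lookup against Vault itself.
$password = Deferred('vault_lookup::lookup',
  ['secret/bob_pass', 'https://vault.example.com:8200'])

user { 'bob':
  ensure   => present,
  password => $password,
}
```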


> On Fri, Aug 31, 2018 at 9:12 AM R.I.Pienaar wrote:
>
>>
>>
>> On Fri, 31 Aug 2018, at 15:03, Chadwick Banning wrote:
>> > Would it be safe to consider this in a general context i.e. as enabling 
>> > agent-side function execution?
>>
>> I dont think so - for general function calls to be usable you want to get 
>> the value and then do some conditional logic on it.  or put it in a 
>> variable and use it in another resource etc.
>>
>> That is not what this is for, this is a based placeholder to later be 
>> replaced by the value - you cannot do any conditionals etc with it.
>>
>> Imagine something like:
>>
>> mysql::user{"bob":
>>   password => Deferred(vault_lookup, "bob_pass")
>> }
>>
>> (I am just making this syntax up, this is presumably not how it will look)
>>
>> Here its fine because its a simple interpolation into a value, you cant 
>> do more complex things with this design.
>>
>> Anyway thats my understanding, Henrik might chime in too
>>
>
>
> -- 
> Chadwick Banning
>



Re: [Puppet Users] Re: custom facts

2018-04-25 Thread Luke Bigum
On Wednesday, 25 April 2018 16:58:10 UTC+1, R.I. Pienaar wrote:
>
>
>
> On Wed, 25 Apr 2018, at 17:52, Luke Bigum wrote: 
> > On Wednesday, 25 April 2018 15:18:13 UTC+1, Michael Di Domenico wrote: 
> > > 
> > > > On Wed, Apr 25, 2018 at 10:14 AM, Luke Bigum wrote: 
> > > > On Wednesday, 25 April 2018 15:01:00 UTC+1, Michael Di Domenico 
> wrote: 
> > > >> 
> > > >> in the past i'd copy my ruby facts into 
> > > >> /usr/share/ruby/vendor_ruby_facter (which probably wasnt right) 
> > > > 
> > > > 
> > > > No... That's definitely not right :-)  Puppet has had a feature 
> called 
> > > > "pluginsync" for a while now, which downloads ruby code (types, 
> > > providers, 
> > > > facts) from a Puppet Master before it does anything on a Puppet 
> Agent. 
> > >  The 
> > > > Agent will write it's downloaded Ruby code into /var/lib/puppet/lib/ 
> > > > (/opt/puppetlabs/puppet/cache/lib in Puppet 5), and it will keep it 
> > > > synchronised so you can't pollute it. 
> > > 
> > > we're using puppet in standalone mode, not server/client. 
> > > 
> > 
> > Perhaps something like this then, though that answer is old, in theory 
> it 
> > should probably work for new Puppet: 
> > 
> > https://ask.puppet.com/question/4645/puppet-apply-and-pluginsync/ 
> > 
>
> In recent Puppet with puppet apply it automatically finds facts in your 
> modules, you dont need to copy them anywhere or sync them. 
>
> I have not really been following this thread sorry if that's not helpful - 
> but it basically just works 
>


I think the OP is trying to run Facter standalone on the command line, as 
he mentions FACTERLIB, but yes, "puppet apply" should just work as is.



Re: [Puppet Users] Re: custom facts

2018-04-25 Thread Luke Bigum
On Wednesday, 25 April 2018 15:18:13 UTC+1, Michael Di Domenico wrote:
>
> On Wed, Apr 25, 2018 at 10:14 AM, Luke Bigum wrote: 
> > On Wednesday, 25 April 2018 15:01:00 UTC+1, Michael Di Domenico wrote: 
> >> 
> >> in the past i'd copy my ruby facts into 
> >> /usr/share/ruby/vendor_ruby_facter (which probably wasnt right) 
> > 
> > 
> > No... That's definitely not right :-)  Puppet has had a feature called 
> > "pluginsync" for a while now, which downloads ruby code (types, 
> providers, 
> > facts) from a Puppet Master before it does anything on a Puppet Agent. 
>  The 
> > Agent will write it's downloaded Ruby code into /var/lib/puppet/lib/ 
> > (/opt/puppetlabs/puppet/cache/lib in Puppet 5), and it will keep it 
> > synchronised so you can't pollute it. 
>
> we're using puppet in standalone mode, not server/client. 
>

Perhaps something like this then, though that answer is old, in theory it 
should probably work for new Puppet:

https://ask.puppet.com/question/4645/puppet-apply-and-pluginsync/ 



[Puppet Users] Re: custom facts

2018-04-25 Thread Luke Bigum
On Wednesday, 25 April 2018 15:01:00 UTC+1, Michael Di Domenico wrote:
>
> in the past i'd copy my ruby facts into 
> /usr/share/ruby/vendor_ruby_facter (which probably wasnt right)
>

No... that's definitely not right :-)  Puppet has had a feature called 
"pluginsync" for a while now, which downloads Ruby code (types, providers, 
facts) from a Puppet Master before it does anything on a Puppet Agent.  The 
Agent will write its downloaded Ruby code into /var/lib/puppet/lib/ 
(/opt/puppetlabs/puppet/cache/lib in Puppet 5), and it will keep it 
synchronised so you can't pollute it.

Probably the simplest thing for you to do right now is create a new module 
(call it yourcompany_stdlib ?) and put all your Facts in there.  Custom 
Facts distributed from a module live in MODULEROOT/lib/facter/; here are 
some examples from Puppetlabs' stdlib.  You just put your .rb files in this 
directory:

https://github.com/puppetlabs/puppetlabs-stdlib/tree/master/lib/facter
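As a sketch, a custom fact file in that directory might look like this (the fact name and the hostname convention it parses are made up):

```ruby
# MODULEROOT/lib/facter/datacentre.rb
# A minimal custom fact: derives a 'datacentre' fact from a
# hypothetical hostname naming convention like 'dc1-web-01'.
Facter.add(:datacentre) do
  setcode do
    Facter.value(:hostname).to_s[/\A(dc\d+)/, 1]
  end
end
```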

Then if you add that module to your Puppet Master, your Agents will 
magically synchronise them down - this works on Puppet 3 through Puppet 5 
(you don't need to "include" or do anything in a manifest).

I'd link you some puppet.com docs too, but right now I'm getting 5xx 
Cloudflare errors.  If the site is working for you, look around for how to 
distribute custom facts in a module.



Re: [Puppet Users] Generating monitoring from PuppetDB without exported resources

2018-03-16 Thread Luke Bigum
I guess I'm not 100% sure what I'm trying to do yet, nor whether it's a 
good idea or too complicated... which is why I'm asking what other people 
do :-)

I already bypass exporting and realising resources for our Nagios service 
checks.  This was a performance enhancement - we've got 10s of 1000s of 
Nagios checks per server, and realising all those resources into Ruby 
objects was really slow (this was back before PuppetServer).  Instead we 
have a template making a PuppetDB API call, getting back a blob of JSON and 
parsing that into Nagios Service definitions.  It queries Defined Type 
resources from Puppet, though, so it's pretty easy to parse into Nagios:

  url4 = 'http://puppet:8081/pdb/query/v4/resources/Nagios::Config::Service'
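The parsing step can be sketched roughly like this. The sample response below is hypothetical, modelled on the PuppetDB v4 resources endpoint, whose entries carry "certname", "title" and "parameters" keys; the check_command value and host name are made up:

```ruby
require 'json'

# Hypothetical sample of a PuppetDB v4 resources response for a defined type.
response = <<~JSON
  [
    {"certname": "web1.example.com",
     "title": "https_check_foo.com",
     "parameters": {"check_command": "check_https!foo.com"}}
  ]
JSON

# Turn each resource object into a Nagios service definition (sketch).
definitions = JSON.parse(response).map do |r|
  <<~NAGIOS
    define service {
      host_name           #{r['certname']}
      service_description #{r['title']}
      check_command       #{r['parameters']['check_command']}
    }
  NAGIOS
end

puts definitions.join
```

Because the defined type's parameters come back as a plain hash, the template stays simple - it only needs to know the parameter names of Nagios::Config::Service.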

However that requires that we add a Nagios::Config::Service resource into a 
Puppet catalog somewhere in order to get a check.  Some part of me thinks 
this is a bit wasteful... Here's a simple contrived example: if I was 
monitoring PuppetLabs Apache::Vhosts, I would have two resources in a 
catalog:

  apache::vhost { 'foo.com': }
  nagios::config::service { 'https_check_foo.com': }

Why do I need the second resource if all the information I need is already 
in the first resource?  Could I not just parse the PuppetDB data looking 
for Apache::Vhost resources directly?  That would mean I wouldn't have to 
have a Profile of my own code to add my own monitoring resource.  If I had 
something that could do that and generate Nagios config, perhaps it 
wouldn't be too hard to extend it to generate boilerplate Goss or 
ServerSpec config for acceptance testing, the same way Puppet Retrospec 
does for unit tests...  The monitoring config would be somewhat decoupled 
from my Puppet runs, I could change the way checks are defined without a 
Puppet Agent catalog compilation needing to occur.

There are two big disadvantages I can see. If the interface of 
Apache::Vhost changes, the generated monitoring breaks with it.  The second 
is that any complicated monitoring that requires an extra package or script 
to be installed on a machine is going to be defined in Puppet anyway, so 
moving the check definitions out of Puppet in order to avoid wasteful code 
doesn't make much sense any more.  I think I've just talked myself out of 
it :-)

Thoughts?

-- 
You received this message because you are subscribed to the Google Groups 
"Puppet Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to puppet-users+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/puppet-users/3477c4b2-7607-4dfb-ad6a-c7f4af885877%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


[Puppet Users] Generating monitoring from PuppetDB without exported resources

2018-03-15 Thread Luke Bigum
Hello,

Is anybody doing (or know of someone doing) any advanced parsing of Puppet 
resources from PuppetDB, perhaps for the purpose of generating config for 
centralised monitoring, or even acceptance/integration tests?  The 
traditional way is to use Exported Resources, but I've been toying with the 
idea of bypassing that and building config straight off data in PuppetDB.  
I'm looking for people who may be doing this, tried something similar, or 
anyone interested in bouncing ideas around.

Cheers,

-Luke

-- 
You received this message because you are subscribed to the Google Groups 
"Puppet Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to puppet-users+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/puppet-users/7642ca09-76e7-40d8-970d-05aec7af3b48%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


[Puppet Users] Re: Unable to use logrotate puppet forge module

2017-08-09 Thread Luke Bigum
Working backwards from https://github.com/voxpupuli/puppet-logrotate...

create_resources() is called to create logrotate::rule resources from a 
Hash called $rules: 
https://github.com/voxpupuli/puppet-logrotate/blob/master/manifests/rules.pp

$rules is inherited from the entry-point class, Class[logrotate]: 
https://github.com/voxpupuli/puppet-logrotate/blob/master/manifests/init.pp#L7

There's a typo in your Hiera key; it should be:

logrotate::rules:

with an "s".


On Wednesday, 9 August 2017 02:39:13 UTC+1, Jagga Soorma wrote:
>
> Hi Guys, 
>
> I am trying to use this logrotate puppet module from the forge 
> (https://forge.puppet.com/puppet/logrotate) and seems like it updates 
> my logrotate.conf file without any issues.  However, I am now trying 
> to add a new logrotate.d rule using hiera and it does not do anything 
> and I was wondering if there is something I am missing.  Here is what 
> it looking like in my yaml file: 
>
> -- 
> classes: 
>   - logrotate 
>
> # logrotate 
> logrotate::rule: 
>   'messages': 
> path: '/var/log/messages' 
> rotate: 5 
> rotate_every: 'week' 
> create: true 
> create_mode: '0644' 
> missingok: true 
> sharedscripts: true 
> postrotate: '/bin/kill -HUP `cat /var/run/syslogd.pid 2> 
> /dev/null` 2> /dev/null || true' 
> -- 
>
> However puppet agent -t --noop and debug don't see it making any 
> changes to logrotate.d.  Does anyone else on this list have experience 
> with this specific module?  Just wondering what I am missing here. 
>
> Thanks in advance for any help with this. 
>

-- 
You received this message because you are subscribed to the Google Groups 
"Puppet Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to puppet-users+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/puppet-users/d157e316-854f-481b-9cd4-ed821d321cbc%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [Puppet Users] merging in-module default data with role-profile parameters

2017-07-17 Thread Luke Bigum
On Friday, 14 July 2017 17:17:03 UTC+1, R.I. Pienaar wrote:
>
>
> I have not really found a elegant solution, and I think the right way is 
> to stick this stuff in hiera on the mcollective::server_config key 
> rather than try and set it via the params. 
>
> You're not doing anything programatic about this data, so why not put it 
> in hiera? 
>

 
The "why" is mostly to do with what I consider to be "data" and what I 
consider to be "design", influenced a little bit by coding style and the 
use/abuse of Hiera in our environment.  I can boil it down to three main 
points:

Design in one place - if I'm writing Profiles that use tech/component 
modules, I prefer all the design/business logic in one place (in the 
profile) rather than having half the params in Puppet code and half in 
Hiera.  Profile parameters that are actually data live in Hiera.

"Staticness" / attempt to reduce entropy - hard-coding component class 
parameters makes it harder, if not impossible, for a value to be overridden 
in Hiera, and thus for machines to drift apart / introduce entropy. In a 
perfect world this probably wouldn't happen, but all too often I find 
examples of an engineer fixing one specific problem by setting one specific 
Hiera key for a node, not knowing that they've just made that machine 
behave differently to its 19 other sibling machines that are supposed to 
be exactly the same. Discipline and code review also help stop this from 
happening.

Testing - if I care a *very* great deal about a certain parameter's value, 
I can write that data value in Puppet code and RSpec unit test / Beaker 
acceptance test that Profile.  If the value came from Hiera, I'd have to 
start testing my Hiera data, and since the top level of my Hierarchy is 
"Node", doesn't that mean I'd have to run the same test for each of my 20 
nodes that are supposed to be the same...?  With the value hard-coded in 
Puppet, I've got one place to test it. I get a certain level of 
assurance that it can't change, and a certain degree of confidence that the 
machines the profile applies to are "the same".

-- 
You received this message because you are subscribed to the Google Groups 
"Puppet Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to puppet-users+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/puppet-users/bb3b78bf-fe5c-4340-a875-0ec15186ddb8%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


[Puppet Users] merging in-module default data with role-profile parameters

2017-07-14 Thread Luke Bigum
Hello,

I've come across an issue with how I want to write profiles vs how a module 
chooses to structure their default data.  As an example, 
the choria-io/puppet-mcollective module uses hashes of in-module data for 
each configuration file (which is quite elegant, reduces the amount of 
templates needed).  My issue with it is how I want to explicitly define 
some parameters for a profile I'm writing.  If I set a few keys of the 
'server_config' hash I end up overwriting the rest of the defaults in the 
module data, because an explicitly defined class param trumps the entire 
in-module data hash:

**
  class { '::mcollective':
    server        => true,
    client        => true,
    server_config => {
      rpcauditprovider          => 'choria',
      'plugin.rpcaudit.logfile' => '/var/log/puppetlabs/choria-audit.log',
    },
  }
**

I've got a solution by calling lookup() to get the original data structure, 
then doing a hash merge in Puppet code:

**
  $default_data = lookup('mcollective::server_config')
  $my_data = {
    rpcauditprovider          => 'choria',
    'plugin.rpcaudit.logfile' => '/var/log/puppetlabs/choria-audit.log',
  }
  class { '::mcollective':
    server        => true,
    client        => true,
    server_config => $default_data + $my_data,
  }
**
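For what it's worth, Puppet's `+` on hashes is a shallow (one-level) merge with the same semantics as Ruby's Hash#merge: right-hand keys win, and nested hashes are replaced wholesale rather than deep-merged. A quick illustration (the default values here are made up):

```ruby
# Shallow merge: keys in 'overrides' win; untouched defaults survive.
defaults = {
  'rpcauditprovider' => 'none',
  'classesfile'      => '/opt/puppetlabs/mcollective/classes.txt',
}
overrides = {
  'rpcauditprovider'        => 'choria',
  'plugin.rpcaudit.logfile' => '/var/log/puppetlabs/choria-audit.log',
}
merged = defaults.merge(overrides)
puts merged
```

So the lookup-then-merge pattern keeps every module default except the specific keys being overridden, which is exactly the behaviour an explicitly passed class parameter destroys.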

Would anyone consider that a dumb approach? Are there better ways?

Thanks,

-Luke

-- 
You received this message because you are subscribed to the Google Groups 
"Puppet Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to puppet-users+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/puppet-users/72d9a6f3-0133-4634-bafc-5c84e0109328%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


[Puppet Users] Re: puppet-archive

2017-04-03 Thread Luke Bigum
>From the docs at https://github.com/voxpupuli/puppet-archive#reference 
(Resources -> Archive section):

  creates: if file/directory exists, will not download/extract archive.

If you've got a creates parameter with the value of ${windir} (which for 
you is C:\temp\), then the archive resource won't do anything if C:\temp\ 
exists. On a Windows machine there's a very good chance that directory will 
always exist, which is why your archive is not doing anything.

The creates flag is there to stop the archive resource from running over 
and over again; it specifies a file that will only exist AFTER you've 
downloaded and extracted your archive.  Think of it as a conditional to do 
work or not do work.
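The guard logic is roughly this (a sketch of the idea, not the module's actual implementation; the sentinel file name is illustrative):

```ruby
require 'tmpdir'

# Sketch of the 'creates' guard: only do the download/extract work
# when the sentinel path does not yet exist.
def archive_needs_work?(creates)
  !File.exist?(creates)
end

Dir.mktmpdir do |dir|
  sentinel = File.join(dir, 'te_agent.msi')
  puts archive_needs_work?(sentinel)   # true  - sentinel absent, would extract
  File.write(sentinel, '')
  puts archive_needs_work?(sentinel)   # false - already extracted, skip
end
```

This is why pointing creates at a directory that always exists (like C:\temp\) makes the resource permanently a no-op.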

I've downloaded /te_agent_8.4.2_en_windows_x86_64.zip and looked inside. I 
would suggest you try this parameter:

  creates => "${windir}/te_agent_8.4.2_en_windows_x86_64/te_agent.msi"

-Luke


On Monday, 3 April 2017 16:53:44 UTC+1, Ryan Vande wrote:
>
> Can you explain further?
>
> As of now, if I dont manually create 
> ${windir}/te_agent_8.4.2_en_windows_x86_64 which is actually 
> c:\temp\te_agent_8.4.2_en_windows_x86_64 it blows up but creates never 
> creates it otherwise
>
> If I manual create the directories, the puppet cycle goes without error 
> but the zip is never created/extracted to 
> c:\temp\te_agent_8.4.2_en_windows_x86_64
>
>
>
> On Monday, April 3, 2017 at 9:53:46 AM UTC-5, Luke Bigum wrote:
>>
>> Actually no, it's going to need to be some file that's inside the ZIP 
>> archive, not the name of the ZIP archive itself. You get the idea though.
>>
>>
>> On Monday, 3 April 2017 15:49:59 UTC+1, Luke Bigum wrote:
>>>
>>>
>>> creates   => $windir,
>>>
>>>
>>> ^^^  I'm fairly certain that this resource won't run if that file 
>>> exists, which is most likely a directory (and does exist). I'd say it has 
>>> to be this:
>>>
>>>
>>>   creates   => "${windir}/te_agent_8.4.2_en_windows_x86_64.zip"
>>>
>>>
>>>
>>>
>>> On Monday, 3 April 2017 15:42:56 UTC+1, Ryan Vande wrote:
>>>>
>>>> I posted this in slack puppet community, lets see if I can get more 
>>>> ideas here 
>>>>
>>>> I have the following setup
>>>>
>>>> when puppet runs on the agent puppet node, no errors happen but nothing 
>>>> else happens either, have a look please 
>>>>
>>>> Im using Puppet Archive for the following 
>>>>
>>>> Puppetfile:
>>>> mod 'puppet-archive', '1.3.0'
>>>> mod 'puppetlabs-stdlib', '4.16.0'
>>>>
>>>>
>>>> hieradata/global.yaml:
>>>> artifactory_host: artifactory.azcender.com
>>>> tripwire::wintripdir: 'c://temp'
>>>>
>>>>
>>>> Profile:
>>>>
>>>> include ::archive
>>>> archive {"${windir}/te_agent_8.4.2_en_windows_x86_64.zip" :
>>>>   ensure=> present,
>>>>   source=> 
>>>> "http://${artifactory_host}/artifactory/application-release-local/gov/usda/fs/busops/cio/Tripwire/te_agent_8.4.2_en_windows_x86_64.zip";,
>>>>   extract   => true,
>>>>   extract_path  => $windir,
>>>>   creates   => $windir,
>>>>   cleanup   => false,
>>>>
>>>>
>>>> Puppet agent runs on the puppet node without error but nothing happens 
>>>> , meaning no files are uploaded and extracted to the node 
>>>> Any assistance is much appreciated 
>>>>
>>>

-- 
You received this message because you are subscribed to the Google Groups 
"Puppet Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to puppet-users+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/puppet-users/b91b8640-dd76-4ebb-9dc2-9888098aee34%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


[Puppet Users] Re: puppet-archive

2017-04-03 Thread Luke Bigum
Actually no, it's going to need to be some file that's inside the ZIP 
archive, not the name of the ZIP archive itself. You get the idea though.


On Monday, 3 April 2017 15:49:59 UTC+1, Luke Bigum wrote:
>
>
> creates   => $windir,
>
>
> ^^^  I'm fairly certain that this resource won't run if that file exists, 
> which is most likely a directory (and does exist). I'd say it has to be 
> this:
>
>
>   creates   => "${windir}/te_agent_8.4.2_en_windows_x86_64.zip"
>
>
>
>
> On Monday, 3 April 2017 15:42:56 UTC+1, Ryan Vande wrote:
>>
>> I posted this in slack puppet community, lets see if I can get more ideas 
>> here 
>>
>> I have the following setup
>>
>> when puppet runs on the agent puppet node, no errors happen but nothing 
>> else happens either, have a look please 
>>
>> Im using Puppet Archive for the following 
>>
>> Puppetfile:
>> mod 'puppet-archive', '1.3.0'
>> mod 'puppetlabs-stdlib', '4.16.0'
>>
>>
>> hieradata/global.yaml:
>> artifactory_host: artifactory.azcender.com
>> tripwire::wintripdir: 'c://temp'
>>
>>
>> Profile:
>>
>> include ::archive
>> archive {"${windir}/te_agent_8.4.2_en_windows_x86_64.zip" :
>>   ensure=> present,
>>   source=> 
>> "http://${artifactory_host}/artifactory/application-release-local/gov/usda/fs/busops/cio/Tripwire/te_agent_8.4.2_en_windows_x86_64.zip";,
>>   extract   => true,
>>   extract_path  => $windir,
>>   creates   => $windir,
>>   cleanup   => false,
>>
>>
>> Puppet agent runs on the puppet node without error but nothing happens , 
>> meaning no files are uploaded and extracted to the node 
>> Any assistance is much appreciated 
>>
>

-- 
You received this message because you are subscribed to the Google Groups 
"Puppet Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to puppet-users+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/puppet-users/27c3ab11-b78c-4c1b-bdcb-0af6317dc0f4%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


[Puppet Users] Re: puppet-archive

2017-04-03 Thread Luke Bigum

creates   => $windir,


^^^  I'm fairly certain that this resource won't run if that file exists, 
which is most likely a directory (and does exist). I'd say it has to be 
this:


  creates   => "${windir}/te_agent_8.4.2_en_windows_x86_64.zip"




On Monday, 3 April 2017 15:42:56 UTC+1, Ryan Vande wrote:
>
> I posted this in slack puppet community, lets see if I can get more ideas 
> here 
>
> I have the following setup
>
> when puppet runs on the agent puppet node, no errors happen but nothing 
> else happens either, have a look please 
>
> Im using Puppet Archive for the following 
>
> Puppetfile:
> mod 'puppet-archive', '1.3.0'
> mod 'puppetlabs-stdlib', '4.16.0'
>
>
> hieradata/global.yaml:
> artifactory_host: artifactory.azcender.com
> tripwire::wintripdir: 'c://temp'
>
>
> Profile:
>
> include ::archive
> archive {"${windir}/te_agent_8.4.2_en_windows_x86_64.zip" :
>   ensure=> present,
>   source=> 
> "http://${artifactory_host}/artifactory/application-release-local/gov/usda/fs/busops/cio/Tripwire/te_agent_8.4.2_en_windows_x86_64.zip";,
>   extract   => true,
>   extract_path  => $windir,
>   creates   => $windir,
>   cleanup   => false,
>
>
> Puppet agent runs on the puppet node without error but nothing happens , 
> meaning no files are uploaded and extracted to the node 
> Any assistance is much appreciated 
>

-- 
You received this message because you are subscribed to the Google Groups 
"Puppet Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to puppet-users+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/puppet-users/244e9090-31a3-40ab-9a19-59edc77d9b31%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [Puppet Users] Custom Facts using awk

2017-03-30 Thread Luke Bigum
On Thursday, 30 March 2017 16:11:35 UTC+1, Warron French wrote:
>
> Hi Luke, I have some questions for you.
>
> First, the link= 
> https://github.com/puppetlabs/puppetlabs-apache/blob/master/lib/facter/apache_version.rb
>  
> didn't have any reference to awk at all, was this the file you intended to 
> suggest?
>
>
Oh, I wasn't giving you Awk examples specifically, I was giving you one 
with a small amount of Ruby, and one with a bit more Ruby and some string 
manipulation in it. The use of Awk and piped shell commands in my Fact 
there is 100% pure laziness, it would be more "robust" to do all of the 
string manipulation in Ruby.
 

> Secondly, the link= 
> https://github.com/LMAX-Exchange/puppet-networking-example/blob/master/lib/facter/interface_ringbuffer.rb
>  
> did have a reference to awk; thank you.
> However, the syntax:
>   ethtool_g = %x{/sbin/ethtool -g #{int} 2>/dev/null | grep -P 
> '^(RX|TX):' | awk '{print $2}'}
>
> Looks like something other than just plain shell scripting, so can you 
> break this down for me just a little bit?
>
> I recognize what looks like a variable, called ethtool_g, and then it 
> continues with assignement based on %x{...}  where the "" is your 
> shell scripting.
>
> What is the *%x* a reference for/to?  Can I simply replace your variable 
> with one of my own, and your shell scripting between the curly braces with 
> my own shell scripting?
>
 
Correct, ethtool_g is a Ruby variable.

%x{} is one of the ways of executing something in a shell and capturing its 
STDOUT; there are other ways, and this post explains them well: 
http://stackoverflow.com/questions/2232/calling-shell-commands-from-ruby

The #{int} embeds the Ruby variable 'int', defined earlier, into the 
string.

Is that legal, and is this in the language of ruby (so I have a reference 
> point of where to go to look up more examples?
>

Yes, you can.

What I would recommend is to copy one of those Facts to your home directory, 
then set an environment variable FACTERLIB=/home/$USERNAME, which adds an 
extra Facter search path. If you then run "facter -p" you should see the new 
Fact in the list. Then you can edit your Fact to your heart's content, and 
Google every crash or error message you come up with ;-) Once it actually 
works you can add the Fact to a module and distribute it to your servers.

-Luke
 

> Sorry for the load of questions.  Thank you for the information.
>
> --
> Warron French
>
>
> On Thu, Mar 30, 2017 at 11:03 AM, warron.french  > wrote:
>
>> Hey, thanks for the examples Luke!  I am looking at them now.
>>
>> --
>> Warron French
>>
>>
>> On Thu, Mar 30, 2017 at 8:31 AM, Luke Bigum > > wrote:
>>
>>> Puppet modules on Github are a good source. I've found a simple one:
>>>
>>>
>>> https://github.com/puppetlabs/puppetlabs-apache/blob/master/lib/facter/apache_version.rb
>>>
>>> And one of my own that's a little more complicated:
>>>
>>>
>>> https://github.com/LMAX-Exchange/puppet-networking-example/blob/master/lib/facter/interface_ringbuffer.rb
>>>
>>> -Luke
>>>
>>> On Thursday, 30 March 2017 13:10:35 UTC+1, Warron French wrote:
>>>>
>>>> Joshua, thanks for this feedback.  I don't really know ruby; can you 
>>>> offer some ideas of where I can find other Puppet Facts written in Ruby 
>>>> that don't look like my originally posted example?
>>>>
>>>> Thank you sir.
>>>>
>>>> --
>>>> Warron French
>>>>
>>>>
>>>> On Tue, Mar 28, 2017 at 10:51 AM, Joshua Schaeffer <
>>>> jschaef...@gmail.com> wrote:
>>>>
>>>>> External facts are a Puppet v4 feature only. You have to use Ruby to 
>>>>> create custom facts in Puppet v3.
>>>>>
>>>>> On Monday, March 27, 2017 at 3:54:00 PM UTC-6, Warron French wrote:
>>>>>>
>>>>>> OK, done, and done.  But it still isn't showing up.
>>>>>>
>>>>>> Is this potentially because I am using PE-3.8 as a component of Red 
>>>>>> Hat Satellite?
>>>>>>
>>>>>> --
>>>>>> Warron French
>>>>>>
>>>>>>
>>>>>> On Mon, Mar 27, 2017 at 5:28 PM, Peter Bukowinski  
>>>>>> wrote:
>>>>>>
>>>>>>>

Re: [Puppet Users] Custom Facts using awk

2017-03-30 Thread Luke Bigum
Puppet modules on Github are a good source. I've found a simple one:

https://github.com/puppetlabs/puppetlabs-apache/blob/master/lib/facter/apache_version.rb

And one of my own that's a little more complicated:

https://github.com/LMAX-Exchange/puppet-networking-example/blob/master/lib/facter/interface_ringbuffer.rb

-Luke

On Thursday, 30 March 2017 13:10:35 UTC+1, Warron French wrote:
>
> Joshua, thanks for this feedback.  I don't really know ruby; can you offer 
> some ideas of where I can find other Puppet Facts written in Ruby that 
> don't look like my originally posted example?
>
> Thank you sir.
>
> --
> Warron French
>
>
> On Tue, Mar 28, 2017 at 10:51 AM, Joshua Schaeffer  > wrote:
>
>> External facts are a Puppet v4 feature only. You have to use Ruby to 
>> create custom facts in Puppet v3.
>>
>> On Monday, March 27, 2017 at 3:54:00 PM UTC-6, Warron French wrote:
>>>
>>> OK, done, and done.  But it still isn't showing up.
>>>
>>> Is this potentially because I am using PE-3.8 as a component of Red Hat 
>>> Satellite?
>>>
>>> --
>>> Warron French
>>>
>>>
>>> On Mon, Mar 27, 2017 at 5:28 PM, Peter Bukowinski  
>>> wrote:
>>>
 Hi Warron,

 Puppet executes the script directly, so you need the shebang line and 
 you must ensure the file is executable.

 -- Peter

 On Mar 27, 2017, at 2:25 PM, warron.french  wrote:

 Peter, perhaps I misunderstood you; but, I thought I was supposed to be 
 able to use bash or sh scripting to generate facters of my own without the 
 use of Ruby.

 The link you provided refers to a python script example.  It also adds 
 a shebang line at the top of the script; do I need the shebang line, or 
 will Puppet simply execute the shell script with:

 sh scriptname.sh

 Thanks for the feedback,

 --
 Warron French


 On Wed, Mar 22, 2017 at 7:07 PM, Peter Bukowinski  
 wrote:

> Hi Warron,
>
> I'd consider using an external, executable fact to avoid ruby 
> altogether.
>
>   
> https://docs.puppet.com/facter/3.6/custom_facts.html#executable-facts-unix
>
> Basically, you can write a bash script (or use any language you want),
> drop it into '//facts.d/' on your puppet server,
> and it will be synced to all your nodes (assuming you use pluginsync).
>
> The only requirement for executable fact scripts is that they must
> return key/value pairs in the format 'key=value'. Multiple keys/values
> get their own line. In your case, you could do something like this:
>
> -
> #!/bin/bash
>
> key="qty_monitors_total"
> value=$(your parsing command for /var/log/Xorg.0.log here)
>
> echo "${key}=${value}"
> -
>
> Save the file as an executable script in the above mentioned path and
> it should be available on the next puppet run.
>
> On Wed, Mar 22, 2017 at 3:24 PM, warron.french  
> wrote:
> > Hello, I have finally learned how to write a Custom Fact; and 
> duplicated the
> > syntax several times over inside the same .rb file.
> >
> > I am using syntax that looks like the following:
> >
> > Facter.add('qty_monitors_total') do
> >   setcode  do
> >  Facter::Util::Resolution.exec('/bin/grep " connected"
> > /var/log/Xorg.0.log | cut -d\) -f2,3,4 | grep GPU |sort -u | wc -l')
> >   end
> > end
> >
> > I don't know of any other way to do this yet; but that's not my 
> concern yet.
> >
> > What I would like to know is how can I use an awk command within the
> > Facter::Util::Resolution.exec('.') line.
> >
> > I have a need to essentially reproduce the line above but drop   wc 
> -l and
> > add awk '{ print $2"_"$3"_on_"$1$4 }'
> >
> > I need the awk command to pretty much look like this; the problem is 
> awk
> > uses its own single quotes (') and it will break the ruby code.
> >
> > I am not a ruby developer; so if someone could either tell me:
> >
> > It's just not possible; or
> > do it this way
> >
> >
> > That would be greatly appreciated.  Thank you,
> >
> > --
> > Warron French
> >
> > --
> > You received this message because you are subscribed to the Google 
> Groups
> > "Puppet Users" group.
> > To unsubscribe from this group and stop receiving emails from it, 
> send an
> > email to puppet-users...@googlegroups.com.
> > To view this discussion on the web visit
> > 
> https://groups.google.com/d/msgid/puppet-users/CAJdJdQmZXQAd%2Bo%2Bnp-NHqxGHnXubf%2Bac-dP5FPoy4QYMEVuBuA%40mail.gmail.com
> .
> > For more options, visit https://groups.google.com/d/optout.
>
> --
> You received this message because you are subscribed to 

[Puppet Users] Re: Using notify with concat module...

2017-01-24 Thread Luke Bigum


On Monday, 23 January 2017 20:55:44 UTC, Sean wrote:
>
> Hello,
>
> I was reading over several threads regarding the use of concat modules and 
> subscribe capabilities.  It seems everyone is subscribe from another 
> resource instead of notify with a concat resource.  My preference is to use 
> notify, as I think it makes the code read better for documentation 
> purposes.  One thread implied that subscribe and notify are interchangeable 
> as long as refreshonly=true.  Is that correct in the case of using notify 
> with concat?  Is it sufficient to use one notify statement inside the main 
> concat resource for a file, or do I need to notify from each 
> concat::fragment resource?  I am hoping someone can clear up a bit of 
> confusion I've developed reading through the threads.  
>

There should be no difference; the examples you've seen were probably 
written by someone with a mental model where 'subscribe' makes more sense, 
whereas you and I think 'notify' reads better.  The one time it might get 
cumbersome is when a single Concat resource has to notify dozens of other 
resources, so its notify parameter ends up being a large array.  In that 
case the code might read better with one subscribe on each of the other 
resources, but that's personal preference.

I would not recommend putting your own requirements on concat::fragments - 
just let the Concat module sort out its own dependencies. You can easily 
create loops, even through implicit relationships that aren't immediately 
obvious. For example, here are two classes: one manages the Gnome dconf 
file, and one of my own sets some of the settings I want, but I've decided 
I need dconf done before the my_desktop class is finished:

*
$dconf_file = '/tmp/dconf'

class my_desktop {
  concat::fragment { 'setting1':
    target => $dconf_file,
  }
  service { 'some_stuff': }
}

class dconf {
  concat { $dconf_file: }
}

include dconf
include my_desktop

Class[dconf] -> Class[my_desktop]
*

That doesn't work so well: the concat::fragment in my_desktop has to be 
evaluated before the Concat resource in dconf can assemble the file, which 
contradicts the explicit Class[dconf] -> Class[my_desktop] ordering, so you 
end up with a dependency cycle.



> For background, I'm using puppet to configure Gnome using dconf.  I've 
> written a simple Exec resource that runs dconf-update, refreshonly => true. 
>  A concat resource might manage a file that collects several Gnome options 
> that relate to each other, where each concat::fragment resource corresponds 
> to a single Gnome option...like a fragment for enabling the screensaver, 
> and another fragment for the idle-delay.  If the file is updated, 
> dconf-update needs to be run and should only be run once at the end of a 
> puppet run.
>
> Thanks for your thoughts.
>

-- 
You received this message because you are subscribed to the Google Groups 
"Puppet Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to puppet-users+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/puppet-users/f2409ade-effa-4868-b4ce-0573466443a8%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [Puppet Users] Role vs hiera

2016-10-26 Thread Luke Bigum
It may not be as difficult as you think, and you can *just* use it to 
insert a fake Fact - you don't have to start actually classifying your node 
classes with it.

I supplied our ENC to the list a while ago, it's just a bit of Python that 
reads YAML:

https://groups.google.com/forum/#!searchin/puppet-users/luke$20bigum%7Csort:date/puppet-users/XWAcm152cyQ/P_rpi50XBAAJ
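The shape of such an ENC is small enough to sketch. This one is in Ruby rather than Python, and the node data, role name and key names are illustrative only, modelled on the ENC output quoted later in this thread:

```ruby
require 'yaml'

# Hypothetical per-node data, as might live in a YAML file keyed by certname.
NODES = {
  'ldap.example.com' => { 'role'    => 'role::directory_server',
                          'context' => 'production' },
}.freeze

# An ENC prints a YAML document with 'classes', 'environment' and
# 'parameters'; the parameters become top-scope variables that
# hiera.yaml can interpolate (e.g. %{::role}).
def enc_output(node)
  {
    'classes'     => { node['role'] => nil },
    'environment' => 'production',
    'parameters'  => { 'context' => node['context'],
                       'role'    => node['role'] },
  }.to_yaml
end

puts enc_output(NODES['ldap.example.com'])
```

Point the master at a script like this via the node_terminus / external_nodes settings and the fake "fact" (really a top-scope variable) is available to the hierarchy with no agent-side configuration.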

On Tuesday, 25 October 2016 20:09:15 UTC+1, Ugo Bellavance wrote:
>
> Hi,
>
> I was actually wondering if it could be done without an ENC as we don't 
> have one for now.
>
> Thanks a lot for your input though.
>
> Ugo
>
> On Tuesday, October 18, 2016 at 3:50:37 PM UTC-4, Matt Zagrabelny wrote:
>>
>> On Tue, Oct 18, 2016 at 1:34 PM, Ugo Bellavance  wrote: 
>> > Hi, 
>> > 
>> > I've seen tutorials where they add the role as a fact in an client and 
>> then 
>> > can use the role for hiera data. Is there a better way to do so (ie 
>> without 
>> > having to configure anything on the client)? 
>>
>> As a matter of fact there is a better way. 
>>
>> If you use an ENC, then you can return the role as a top scope 
>> variable and your hiera configs can leverage those top scope 
>> variables. 
>>
>> Here is an example where I've scrubbed any of our site data: 
>>
>> # puppet-enc ldap.example.com 
>> --- 
>> classes: 
>>   role::directory_server: null 
>> environment: production 
>> parameters: 
>>   context: production 
>>   role: role::directory_server 
>>
>> The "classes" at the top and its "role" are for the classifying of the 
>> ENC, but the "context" and "role" in the  "parameters" near the bottom 
>> are variables that get exposed - hiera is one of the things that can 
>> use those variables. 
>>
>> This works super slick for us. 
>>
>> For what it is worth, we also use a notion of context that allows our 
>> ENC to describe whether a node is a "testing" or "production" type 
>> system - we have hiera lookups based on that data, too. 
>>
>> Let me know if you want the hiera configs. 
>>
>> -m 
>>
>

-- 
You received this message because you are subscribed to the Google Groups 
"Puppet Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to puppet-users+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/puppet-users/cf74c37d-1b97-4326-9766-a10cf7e54f43%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


[Puppet Users] Re: Using a module that is not 100% hiera-compliant

2016-10-19 Thread Luke Bigum
Hello,

You are describing a problem we run into every now and then. Your default 
profile is what we call "mandatory" here, and then you have an edge case 
where 99% of your servers have Postfix the same way, and a couple have it a 
different way. Unfortunately that 99% means Postfix is not mandatory and so 
can't live in your default profile.

If you use smart class inheritance to structure your roles you should be 
able to remove most of the places you need to include postfix. Something 
like this:

class role::base {
  include profile::mandatory
  include profile::somethingelse
  include postfix
}

class role::anotherserver inherits role::base {
  include profile::anotherprofile
}

class role::postfixrelay {
  include profile::mandatory
  include profile::postfixrelay
}



Or another way would be to move the majority of your postfix business logic 
out of Hiera (which as you describe is not working for you) and handle it 
in a profile. The below code introduces a simple Enum on your profile to 
control what "type" of postfix you want:

class profile::mail(
  $type = 'normal',
) {
  if ($type == 'normal') {
    class { 'postfix':
      # ... normal stuff ...
    }
  }
  elsif ($type == 'relay') {
    class { 'postfix':
      # ... relay stuff ...
    }
  }
  else {
    fail("Type '${type}' is not supported")
  }
}

$ cat /etc/puppet/hiera/networks/192.168.155.0.yaml
...
profile::mail::type: 'relay'



I personally would prefer the second option. It enforces the same postfix 
config on almost all your servers (looking at your Hiera hierarchy there 
are plenty of levels to make your servers' Postfix "different" from each 
other). It's also easy to test with rspec.


-Luke


On Tuesday, 18 October 2016 18:56:59 UTC+1, Ugo Bellavance wrote:
>
> Hi,
>
> I am using camptocamp/postfix for my postfix configuration.  I originally 
> defined all my configs in manifests but now I would like to change to using 
> hiera.  Unfortunately, this module doesn't support hiera for some of the 
> configs, so I must define many parameters in the manifests.  I wanted to 
> use hiera for simplicity, but also because I have a very nice use case:  I 
> have one SMTP front-end with its own specific configs (anti-spam/virus), 
> and a series of regular hosts. Traditionally, all hosts that are in the 
> same subnet as the Exchange server would use it as relayhost and all the 
> other hosts use the smtp front-ends.  Therefore, here's what I did:
>
> hiera.yaml:
>
> ---
> :backends:
> #  - regex
>   - yaml
> :yaml:
>   :datadir: /etc/puppet/hiera
> #:regex:
> #  :datadir: /var/lib/hiera
> :hierarchy:
>   - "host/%{fqdn}"
>   - "domain/%{domain}"
>   - "env/%{::environment}"
>   - "os/%{operatingsystem}"
>   - "osfamily/%{osfamily}"
>   - "networks/%{network_ens192}"
>   - "virtual/%{::virtual}"
>   - common
>
> This way, I define the exchange server as relayhost for the exchange 
> network in /etc/puppet/hiera/networks/192.168.155.0.yaml, and set the smtp 
> frontend as relayhost in /etc/puppet/hiera/common.yaml.
>
> However, since I can't put all the settings in hiera, I must put some in 
> the class declaration for the smtp frontends.  When I declare the postfix 
> class in both my default profile and in the smtp frontend profile, I get an 
> error saying that the class cannot be declared twice (Class[Postfix] is 
> already declared; cannot redeclare at 
> /etc/puppet/manifests/nodes/smtp_postfix_servers.pp:19)
>
> Another solution would be to declare the profile in all my roles, but it's 
> far from perfect.
>
> Is there a simple solution?
>
> I guess that I could do an if based on ipaddress in my default profile, 
> but I wanted to use hiera as much as possible. Yes I created an issue to 
> ask for full hiera support.
>
> Thanks,
>
> Ugo
>



[Puppet Users] Re: simple node classification and custom facts

2016-09-06 Thread Luke Bigum
Hi, 

This is mostly a "rethink what you are doing" reply, based on my experience 
of starting with our business logic of an estate almost entirely coded in 
Hiera, and now moving towards a role/profile design.  If it doesn't fit, 
feel free to ignore it - I've answered your question at the end :-)


On Tuesday, 6 September 2016 12:03:33 UTC+1, Berkeley wrote:
>
> I'm doing a refactor of my puppet code with the profiles+roles design 
> pattern.  I'm encountering what should be a simple problem, but I'm having 
> trouble finding an answer.
>
> With roles/profiles, you instantiate classes using 'include' and fetch the 
> parameter values from hiera.
>

Not necessarily, the design pattern is open to interpretation / what works 
for you. I have a certain strict view of the pattern and I personally would 
say the exact opposite of what you've said above. I would never allow Hiera 
data to control a role, and I would only allow Hiera data to control the 
functionality of a profile based on the design of my Hiera tree.
 

> Then, for each node, you specify one role, which in turn includes all the 
> relevant classes. 
>

Relevant "profiles" (which are still classes, yes, but making the point). 
Roles would include relevant profiles, but I wouldn't limit myself to just 
the "include" statement. I would not put component modules or resources 
into roles. If profiles are functionally flexible enough I might use the 
resource style declaration. For example, the profile::mail class below 
takes an Enum type that controls whether it's a simple internal mail server 
that you could use on your web server, or a fully fledged external mail 
relay with all the bells and whistles. In such a design my Postfix code is 
de-duplicated (to a point) in profile::mail, and I provide an interface 
where the operator can change the postfix mode to one of several pre-set 
methods:

class role::webserver {
  class { 'profile::mail':
type => 'internal',
  }
}
class role::mailserver {
  class { 'profile::mail':
type => 'relay',
  }
}

 

> Right now I have a hiera hierarchy that references the node's OS, 
> environment and host name.
>

It's probably a bit late to change now, but I cannot think of any reason 
why you'd need to put OS in as a Hiera level. A well written component 
class (postfix, mysql, apache, etc) should be operating system agnostic, so 
you should never have to do anything different yourself between different 
OS (in theory). In practice there are probably some edge cases, however the 
introduction of a role/profile design means you can put if/case statements 
in your profiles to handle different operating systems, so you still don't 
need OS in your Hiera tree:

class profile::mail {
  if ($::operatingsystem == 'RedHat') {
    $version = 'latest'
  } else {
    $version = '1.1.1'
  }
  package { 'postfix':
    ensure => $version,
  }
}

If you can remove OS, I would recommend you do that eventually. I imagine 
it might take a while.

Hiera should be for business level / design stuff; I would never put a 
standard Facter Fact in a Hiera tree. I have three reasons why. First, it's 
a key value store, and that's it. It can merge Hashes and Arrays, and do 
recursive lookups, not much else. When things expand and get more 
complicated, relying too much on Hiera means you end up doing more and more 
"tricks" to get things to work. Conversely, if you have a simpler Hiera tree 
and more of your logic in Puppet code (where you have if statements, 
selectors, and all the new map functions in Puppet 4) you get a lot more 
control. With a Puppet profile, I could request 4 different Hiera keys, 
take the first two characters of each, sum them together into a hex number 
and write that to disk. Try doing something that complicated in Hiera.

Second reason has to do with entropy. If you make everything ultra 
configurable with Hiera, you're making it easy for servers to be configured 
differently. If you have several staff with different thought patterns and 
ways of solving problems, then very soon you'll see drift appear in your 
Hiera tree. Someone will do it one way, someone will do it another way, 
servers will be different. If the code that configures your servers is a 
lot more static (in Puppet code) then I would say it's easier to audit, and 
so I could assert to my boss something like "Postfix is only ever 
configured in one of two ways". If I relied on a complicated set of Hiera 
keys to control Postfix, I'm not sure I could make that assertion. There's 
nothing stopping some fool overriding the version in Hiera on the node 
level for our mail server if I allow it to happen. If it is a hard coded 
parameter, they can't do that. However, if you have a small estate and only 
one Puppet guy, you can probably keep on top of this.

Third reason is I find it a lot easier to test. The more a piece of Puppet 
code relies on external Hiera data to work properly, the trickier it is to 
test in all circumstances.

[Puppet Users] Re: How to handle predictable network interface names

2016-08-31 Thread Luke Bigum

On Saturday, 27 August 2016 18:51:09 UTC+1, Marc Haber wrote:
>
> On Fri, Aug 26, 2016 at 08:40:49AM -0700, Luke Bigum wrote: 
> > My Dell XPS 13, 2016 model: 
> > 
> >  /sys/class/net/docker0 
> > /sys/class/net/enp0s20u1u3i5 
> > E: ID_NET_NAME_MAC=enx9cebe824ebee 
> > E: ID_NET_NAME_PATH=enp0s20u1u3i5 
>
> What a name! 
>

http://accessories.euro.dell.com/sna/productdetail.aspx?c=uk&l=en&s=bsd&cs=ukbsdt1&sku=452-bboo
 

> > For both the Dell R720 and R730, there's no NET_NAME stuff: 
> > 
> > [root@r720 ~]# udevadm info -q all -p /sys/class/net/p4p2 
> > P: /devices/pci:40/:40:02.0/:42:00.1/net/p4p2 
> > E: UDEV_LOG=3 
> > E: DEVPATH=/devices/pci:40/:40:02.0/:42:00.1/net/p4p2 
> > E: INTERFACE=p4p2 
> > E: IFINDEX=7 
> > E: SUBSYSTEM=net 
>
> Maybe OS too old? The interface name "p4p2" also looks fishy. 
>

Nope, CentOS 6 (which I guess is pretty old now).
 

> > Yes, we definitely don't define resources, and don't include component / 
> > base level classes.  I think we pulled it from an early Gary Larizza 
> post, 
> > along with "roles don't have parameters, if you need to configure a role 
> > you've got two different roles". 
>
> Yes, but dropping a supplementary file does not mean that a role has 
> parameters. And also, it would be duplication if one had two distinct 
> roles that would only differ in single setting? 
>

Mmmm that's one of the two big questions in all our Puppet design 
discussions here. Do I introduce some sort of boolean switch to drop a new 
file,
or do I create a new role / profile. (The other big question is "Should 
this value be hard coded in a profile or as a parameter in Hiera?")

If it is just one file difference, then I would be persuaded to allow a 
boolean switch in the profile. If the file changed functionality inside a 
given location/environment, or was necessary in this location but not 
others, I'd be ok with the boolean being set in Hiera for those 
locations/environments.

If on the other hand the role needed to be used in two different ways 
inside an environment (maybe the file was the difference between Master and 
Slave) then I would advocate two roles. In my mind it's quite clear that 
there are two different "jobs to do" here, rather than something being done 
slightly differently in a different location:

class role::master {
  class { 'profile::something': master => true }
}
class role::slave {
  class { 'profile::something': master => false }
}

The problem I've found with such boolean switches is once someone sees one 
of them as a potential solution, they tend to explode into a mass of if 
statements and 
make things really complicated to read. I think it's because it's easy - I 
can just solve my problem right now with an if statement here, rather than 
spend a few hours
thinking about and refactoring all the related classes.

Getting a good balance between duplication (and testing) versus purist 
design is difficult, and I don't think there will ever be a right answer.

-Luke



[Puppet Users] Re: How to handle predictable network interface names

2016-08-26 Thread Luke Bigum


On Friday, 26 August 2016 10:57:25 UTC+1, Marc Haber wrote:
>
> On Thu, Aug 25, 2016 at 08:08:13AM -0700, Luke Bigum wrote: 
> > On Thursday, 25 August 2016 13:21:24 UTC+1, Marc Haber wrote: 
> > > On Wed, Aug 24, 2016 at 08:36:49AM -0700, Luke Bigum wrote: 
> > > > Here we have very strict control over our hardware and what 
> interface 
> > > goes 
> > > > where. We keep CentOS 6's naming scheme on Dell hardware, so p2p1 is 
> PCI 
> > > > slot 2, Port 1, and don't try rename it. 
> > > 
> > > Isn't CentOS 6 still using eth0, 1, 2, 3? How do you handle different 
> > > hardware having different slot numbers, or PCI bridges shifting bus 
> > > numbers? 
> > > 
> > 
> > I find this depends on the manufacturer. I've never come across a Dell 
> > server newer than an R510 that *doesn't* give you PCI based names. I 
> just 
> > checked an R510 and it does. All of our ancient HP gear (7 years, older 
> > than the R510s which is old) give the ethX names. Also random SuperMicro 
> > hardware gives ethX. I don't really know what's missing for the kernel / 
> > udev to name them so, but for us it doesn't really matter. 
>
> Can you run 
> $ for iface in /sys/class/net/*; do echo $iface; sudo udevadm info -q all 
> -p $iface | grep ID_NET_NAME; done 
> on some of your gear? I'd like to learn what different vendors deliver. 
>
>
My Dell XPS 13, 2016 model:

 /sys/class/net/docker0
/sys/class/net/enp0s20u1u3i5
E: ID_NET_NAME_MAC=enx9cebe824ebee
E: ID_NET_NAME_PATH=enp0s20u1u3i5
/sys/class/net/lo
/sys/class/net/virbr0
/sys/class/net/virbr0-nic
/sys/class/net/virbr1
/sys/class/net/virbr1-nic
/sys/class/net/virbr2
/sys/class/net/virbr2-nic
/sys/class/net/wlp2s0
E: ID_NET_NAME=wlp2s0
E: ID_NET_NAME_MAC=wlxacd1b8c05607
E: ID_NET_NAME_PATH=wlp2s0

For both the Dell R720 and R730, there's no NET_NAME stuff:

[root@r720 ~]# udevadm info -q all -p /sys/class/net/p4p2
P: /devices/pci0000:40/0000:40:02.0/0000:42:00.1/net/p4p2
E: UDEV_LOG=3
E: DEVPATH=/devices/pci0000:40/0000:40:02.0/0000:42:00.1/net/p4p2
E: INTERFACE=p4p2
E: IFINDEX=7
E: SUBSYSTEM=net

And this is an FC430, which is a blade-like chassis with internal PCI 
switches:

[root@FC430 ~]# udevadm info -q all -p /sys/class/net/p5p1/
P: 
/devices/pci0000:00/0000:00:03.0/0000:02:00.0/0000:03:01.0/0000:04:00.0/0000:05:0c.0/0000:08:00.0/net/p5p1
E: UDEV_LOG=3
E: 
DEVPATH=/devices/pci0000:00/0000:00:03.0/0000:02:00.0/0000:03:01.0/0000:04:00.0/0000:05:0c.0/0000:08:00.0/net/p5p1
E: INTERFACE=p5p1
E: IFINDEX=6
E: SUBSYSTEM=net


>  What I get from the abstraction above is being able to take our 
> >  profiles and re-use them in a completely different site on the other 
> >  side of the world, or in a staging / testing environment. So I don't 
> >  have the concept of "VLAN 123 in Production UK", I've just got "The 
> >  STORAGE network" which in Production UK happens to be vlan 123 
> >  (buried low down in Hiera, and only specified once once), but in Dev 
> >  it's 456, and over there it doesn't exist so we'll give it the same 
> >  vlan tag as the CLIENT network, etc... The physical-ness of the 
> >  network is abstracted from the concepts our software relies on. 
>
> Yes, that is a really nice concept with should have been considered 
> here years ago. Alas, people didn't. 
>

To be fair we didn't design it this way from the start, it's only in the 
last couple evolutions that abstraction appeared. What we did have from the 
start though was the concept that the same network segment in different 
environments would have the same IP address segments, so the DATABASE 
network over here is 1.15.7.0, and over there it's 1.40.7.0. The third 
octet for the same network segment at different sites is the same (and 
hopefully the same VLAN tag on switches, but not mandatory). It's easy to 
abstract the numbers into names from there. However there's no reason why 
we couldn't use the same abstraction idea for vastly different or public IP 
ranges, it would just require more Hiera glue.
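A sketch of what that per-site Hiera glue could look like (the file layout, 
key names, and subnet values are illustrative, reusing the STORAGE / vlan 
123 vs 456 example from earlier in the thread):

```yaml
# hiera/site/production-uk.yaml
networks::storage::vlan: 123
networks::storage::subnet: '1.15.8.0'   # hypothetical; third octet fixed per segment

# hiera/site/dev.yaml
networks::storage::vlan: 456
networks::storage::subnet: '1.40.8.0'   # same third octet, different site prefix
```

Profiles then refer only to the logical "STORAGE network" name and look the 
site-specific details up once from Hiera.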
 

> > > So you do create network interfaces in the profile and not in the 
> > > role? 
> > > 
> > 
> > We try to follow the design rule that "Roles only include Profiles". 
>
> ... "and don't define their own resources", you mean? 
>
> That's one of the aspects of the role-and-profiles approach that I 
> have never seen spelled out explicitly, but still honored by nearly 
> anybody, and I have not yet fully grokked the reasons for doing so. 
>

Yes, we definitely don't define resources, and don't include component / 
base level 

Re: [Puppet Users] How to use class in different place

2016-08-26 Thread Luke Bigum
On Friday, 26 August 2016 07:58:39 UTC+1, Martin Alfke wrote:
>
> Hi Henrik, 
> > On 26 Aug 2016, at 00:25, Henrik Lindberg  > wrote: 
> > 
> > 
> > The recommended approach is to always use 'include()' to include the 
> classes (you can include the same class any number of times). You then use 
> data binding (that is, automatic binding of class parameters to values) by 
> storing the parameters in hiera. 
>
> is include() still the recommended way? 
> Or should we start using contain()? 
>

Depends on if you want to ensure a class relationship or not. I would say 
containing classes in other modules is bad design, and soon you'll end up 
creating dependency loops. I would only ever contain a class that's in the 
same module, a common pattern would be:

class drupal {
  contain drupal::config
  contain drupal::install
  Class[drupal::install] -> Class[drupal::config]
}

https://docs.puppet.com/puppet/latest/reference/lang_containment.html

To comment on the original poster's problem, if Class[My_drupal_class] is 
creating Class[Apache] using the resource-like syntax, then 
Class[My_drupal_class] is not designed very well. In practice Drupal does 
not depend on Apache. It's a set of PHP files and a MySQL database, and it 
may or may not have Apache serve its PHP (it could be another web server). 
If Class[My_drupal_class] is intended to be used under Apache, then it 
should create its own Apache::Vhost resource (assuming Puppetlabs' Apache 
module) and that's it. Then somewhere else in your manifest, you will 
instantiate Class[Apache] with all the settings you want. This way you 
could even run other Apache services on the same Drupal machine, or move 
Drupal to any other Apache server. Here's a sketch of a role/profile 
approach I would use:

class role::mywebserver {
  class { 'apache':
    # ... all my options ...
  }
  contain profile::drupal
  contain profile::some_other_apache_service
  Class[apache] -> Class[profile::drupal]
}

class profile::drupal {
  class { 'my_drupal_class':
    option    => 'something',
    parameter => 'something else',
    woof      => 'meow',
  }
  apache::vhost { 'my_drupal_vhost':
    listen      => 80,
    docroot     => '/opt/drupal',
    otherparams => "I can't remember",
  }
}

In the above, profile::drupal is portable to any other role / node.
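For instance (this role and its sibling profile are hypothetical), reusing 
it elsewhere is just another contain alongside a differently-configured 
Apache:

```puppet
# The same profile dropped unchanged into a different role; only the
# Apache configuration around it changes.
class role::intranet_server {
  class { 'apache':
    # ... different options for this role ...
  }
  contain profile::drupal
  Class['apache'] -> Class[profile::drupal]
}
```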



Re: [Puppet Users] How to handle predictable network interface names

2016-08-25 Thread Luke Bigum
On Thursday, 25 August 2016 13:31:17 UTC+1, Marc Haber wrote:
>
> On Wed, Aug 24, 2016 at 09:03:16AM -0700, Luke Bigum wrote: 
> > The template will create udev rules from two sources. The first is 
> > @interfaces, which is the giant multi-level hash of network interfaces 
> that 
> > our old designs use. A VM might look like this in Hiera: 
> > 
> > networking::interfaces: 
> >   eth0: 
> > ipaddr: 1.1.1.1 
> > hwaddr: 52:54:00:11:22:33 
> > 
> > The second source of udev rules is also a Hash and also from Hiera, but 
> > rather than it be embedded in the giant hash of networking information, 
> it 
> > is there to compliment the newer role/profile approach where we don't 
> > specify MAC addresses. This is purely a cosmetic thing for VMs to make 
> our 
> > interface names look sensible. Here is a sanitised Hiera file for a VM 
> with 
> > the fictitious "database" profile: 
> > 
> > profile::database::subnet_INTERNAL_slaves: 
> >   - 'eth100' 
> > profile::database::subnet_CLIENT_slaves: 
> >   - 'eth214' 
> > networking::extra_udev_static_interface_names: 
> >   eth100: '52:54:00:11:22:33' 
> >   eth214: '52:54:00:44:55:66' 
>
> So the "database" machine wouldn't have an entry in 
> networking::interfaces at all, or could one define, for example, the 
> management interface in networking::interfaces and the database 
> interfaces in the machine-specific hiera tree? 
>

That's technically possible with our module, yes, although I personally 
don't want to mix the styles. It has to do with our Hiera hierarchy being 
mostly based on physical location and entire "instances" of our 
application, where what we're talking about here is functionality based. If 
we had a business rule where "every server in this data centre has 
management interface eth76" then yeah, that would match our Hiera hierarchy 
perfectly. We don't have those hard and fast rules though, we've got 
several management networks, with different levels of security applied, 
appropriate for different layers of our application. So our management 
networks are a function of defence in depth security design alongside our 
software, rather than a simple physical location or group of VMs. Since 
they're a function of design or "business logic", rather than location, our 
management networks are defined in profiles (on new systems) because it's 
only at the role/profile level do you know that a "database" server should 
have a certain type of management network.

In my current profiles though I started with the management interfaces 
inside the same software profiles. Turns out this was not the best idea as 
they are not directly related, and what our roles should really look like 
is this:

***
class role::database {
  include profile::mandatory        # Everything mandatory on EL6
  include profile::authentication   # Authentication is not mandatory
  include profile::database         # The profile that does most of the work for our software
  class { 'profile::management':    # management network definition and dependent services (sshd, etc)
    type => 'database',             # but for a specific type of machine
  }
}
***

So management would be separate. This would allow me to do smarter ordering 
of Puppet classes for management services like SSH (and remove a little bit 
more Hiera glue).


Greetings 
> Marc 
>
> -- 
> - 
>
> Marc Haber | "I don't trust Computers. They | Mailadresse im 
> Header 
> Leimen, Germany|  lose things."Winona Ryder | Fon: *49 6224 
> 1600402 
> Nordisch by Nature |  How to make an American Quilt | Fax: *49 6224 
> 1600421 
>



[Puppet Users] Re: How to handle predictable network interface names

2016-08-25 Thread Luke Bigum


On Thursday, 25 August 2016 13:21:24 UTC+1, Marc Haber wrote:
>
> On Wed, Aug 24, 2016 at 08:36:49AM -0700, Luke Bigum wrote: 
> > Here we have very strict control over our hardware and what interface 
> goes 
> > where. We keep CentOS 6's naming scheme on Dell hardware, so p2p1 is PCI 
> > slot 2, Port 1, and don't try rename it. 
>
> Isn't CentOS 6 still using eth0, 1, 2, 3? How do you handle different 
> hardware having different slot numbers, or PCI bridges shifting bus 
> numbers? 
>

I find this depends on the manufacturer. I've never come across a Dell 
server newer than an R510 that *doesn't* give you PCI based names. I just 
checked an R510 and it does. All of our ancient HP gear (7 years, older 
than the R510s which is old) give the ethX names. Also random SuperMicro 
hardware gives ethX. I don't really know what's missing for the kernel / 
udev to name them so, but for us it doesn't really matter.

>  We have a 3rd party patch manager tool (patchmanager.com), LLDP on 
> >  our switches, and a Nagios check that tells me if an interface is not 
> >  plugged into the switch port it is supposed to be plugged into 
> >  (according to patchmanager). 
>
> Nice ;-) Is the code for the Nagios stuff public? 
>

Unfortunately no :-( Another one of those LMAX modules that's had years of 
development but too much company specific stuff hard coded in it to 
release. It's not a huge amount though, and I did just ask my Lead if I 
could clean up our networking module and release it and he was more than 
happy, I'm sure I could do the same for our nagios module. Watch this 
space, but don't hold your breath.


>  This works perfectly on Dell hardware because the PCI name mapping 
> >  works. 
>
> And you don't have many different kinds of servers. 


We try to keep as few as possible, but it's not that small a list:

***
[root@puppet ~]# mco facts productname
Report for fact: productname

.found 1 times
KVM  found 603 times
OptiPlex 7010found 1 times
OptiPlex 7020found 2 times
PowerEdge FC430  found 15 times
PowerEdge FC630  found 56 times
PowerEdge R220   found 1 times
PowerEdge R320   found 92 times
PowerEdge R330   found 1 times
PowerEdge R510   found 17 times
PowerEdge R520   found 66 times
PowerEdge R720   found 36 times
PowerEdge R720xd found 30 times
PowerEdge R730   found 7 times
PowerEdge R730xd found 37 times
Precision Tower 5810 found 10 times
Precision WorkStation T5500  found 7 times
ProLiant DL360 G6found 2 times
ProLiant DL380 G5found 16 times
ProLiant DL380 G6found 11 times
To Be Filled By O.E.M.   found 1 times
X9SCL/X9SCM  found 6 times
*
 

 
>
>  On really old HP gear it doesn't work, 
>
> What does that mean? 
>
>
I meant that on our very old HP servers the PCI device name mapping doesn't 
come up, so you end up with eth0, eth1, etc.

 

> > We still need some sort of "glue record" that says "this interface 
> should 
> > be up and have this IP". In our older designs this was managed entirely 
> in 
> > Hiera - so there's a giant multi-level hash that we run 
> create_resources() 
> > over to define every single network interface. You can imagine the 
> amount 
> > of Hiera data we have. 
>
> That's what we're trying to avoid. Can you share example snippets? 
>


Here is a snippet of the older style, in a Node's Hiera. It is what I'm 
trying to move away from, because if you want to create 20 of these 
machines you've got to copy this Hiera hash around 20 times over. Oh, the 
number of typos... You can probably infer the defined types that this 
data has create_resources() run over; the key names are pretty Red Hat 
specific:

***
networking::interfaces:
  bond1:
bonding_opts: mode=802.3ad xmit_hash_policy=layer3+4 lacp_rate=slow 
miimon=100
enable: true
onboot: 'yes'
type: Bonding
  bond1.3:
broadcast: 1.1.3.255
enable: true
ipaddr: 1.1.3.7
netmask: 255.255.255.0
network: 1.1.3.0
onb

Re: [Puppet Users] How to handle predictable network interface names

2016-08-24 Thread Luke Bigum
Now that I think about it, I might be able to post a sanitised version of 
the module online with most of the internal stuff stripped out. It might 
prove useful for educating our own staff in the concepts, as well as other 
people. It's not a 5 minute job though so if/when it's done, I'll write a 
new Group post instead of continuing to hijack this one :-)

On Wednesday, 24 August 2016 17:05:47 UTC+1, LinuxDan wrote:
>
> It is a starting point.
> Many thanks for sharing what you can.
>
> Dan White | d_e_...@icloud.com 
> 
> “Sometimes I think the surest sign that intelligent life exists elsewhere in 
> the universe is that none of it has tried to contact us.”  (Bill Waterson: 
> Calvin & Hobbes)
>
>
> On Aug 24, 2016, at 12:03 PM, Luke Bigum > 
> wrote:
>
> No, not really :-( It's a very "internal" module that I forked from 
> someone's Google Summer of Code project over 5 years ago (way before 
> voxpupuli/puppet-network). You know all those Hiera keys about vlan tags I 
> mentioned? The defaults are in this module and are the default VLAN 
> interfaces for all of our networks. if I gave out the module the Security 
> team would throttle me for releasing what is part of a map of internal 
> network architecture ;-)
>
> I can however, just post the bit that does the UDEV rules...
>
> *
> $ cat ../templates/etc/udev/rules.d/70-persistent-net.rules.erb
> # Managed by Puppet
>
> <% if @interfaces.is_a?(Hash) -%>
> <%   @interfaces.sort.each do |key,val| -%>
> <% if val['hwaddr'] -%>
> SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="<%= 
> val['hwaddr'] -%>", ATTR{type}=="1", KERNEL=="eth*", NAME="<%= key -%>"
> <% end # if val['hwaddr'] -%>
> <%   end # @interfaces.sort.each -%>
> <% end -%>
> <% if @extra_udev_static_interface_names.is_a?(Hash) -%>
> <%   @extra_udev_static_interface_names.sort.each do |interface,hwaddr| -%>
> SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="<%= 
> hwaddr.downcase -%>", ATTR{type}=="1", KERNEL=="eth*", NAME="<%= interface 
> -%>"
> <%   end -%>
> <% end -%>
Re: [Puppet Users] How to handle predictable network interface names

2016-08-24 Thread Luke Bigum
No, not really :-( It's a very "internal" module that I forked from 
someone's Google Summer of Code project over 5 years ago (way before 
voxpupuli/puppet-network). You know all those Hiera keys about vlan tags I 
mentioned? The defaults are in this module and are the default VLAN 
interfaces for all of our networks. If I gave out the module, the Security 
team would throttle me for releasing what is part of a map of our internal 
network architecture ;-)

I can however, just post the bit that does the UDEV rules...

$ cat ../templates/etc/udev/rules.d/70-persistent-net.rules.erb
# Managed by Puppet

<% if @interfaces.is_a?(Hash) -%>
<%   @interfaces.sort.each do |key,val| -%>
<%     if val['hwaddr'] -%>
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="<%= val['hwaddr'] -%>", ATTR{type}=="1", KERNEL=="eth*", NAME="<%= key -%>"
<%     end # if val['hwaddr'] -%>
<%   end # @interfaces.sort.each -%>
<% end -%>
<% if @extra_udev_static_interface_names.is_a?(Hash) -%>
<%   @extra_udev_static_interface_names.sort.each do |interface,hwaddr| -%>
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="<%= hwaddr.downcase -%>", ATTR{type}=="1", KERNEL=="eth*", NAME="<%= interface -%>"
<%   end -%>
<% end -%>

The template will create udev rules from two sources. The first is 
@interfaces, which is the giant multi-level hash of network interfaces that 
our old designs use. A VM might look like this in Hiera:

networking::interfaces:
  eth0:
    ipaddr: 1.1.1.1
    hwaddr: 52:54:00:11:22:33

The second source of udev rules is also a Hash and also from Hiera, but 
rather than being embedded in the giant hash of networking information, it 
is there to complement the newer role/profile approach where we don't 
specify MAC addresses. This is purely a cosmetic thing for VMs to make our 
interface names look sensible. Here is a sanitised Hiera file for a VM with 
the fictitious "database" profile:

profile::database::subnet_INTERNAL_slaves:
  - 'eth100'
profile::database::subnet_CLIENT_slaves:
  - 'eth214'
networking::extra_udev_static_interface_names:
  eth100: '52:54:00:11:22:33'
  eth214: '52:54:00:44:55:66'
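
If you want to prototype that template's behaviour outside Puppet, it boils down to iterating the two hashes. Here is a rough standalone Ruby sketch (sample data only, with hypothetical MACs), mirroring the ERB logic above:

```ruby
# Standalone sketch of the ERB template's logic (sample data, not real Hiera).
interfaces = {
  'eth0' => { 'ipaddr' => '1.1.1.1', 'hwaddr' => '52:54:00:11:22:33' },
}
extra_udev_static_interface_names = {
  'eth100' => '52:54:00:11:22:33',
  'eth214' => '52:54:00:44:55:66',
}

rules = []

# Source 1: the big networking hash - only entries carrying 'hwaddr'.
interfaces.sort.each do |name, opts|
  next unless opts['hwaddr']
  rules << "SUBSYSTEM==\"net\", ACTION==\"add\", DRIVERS==\"?*\", " \
           "ATTR{address}==\"#{opts['hwaddr']}\", ATTR{type}==\"1\", " \
           "KERNEL==\"eth*\", NAME=\"#{name}\""
end

# Source 2: the flat name => MAC hash used by the role/profile design.
extra_udev_static_interface_names.sort.each do |name, hwaddr|
  rules << "SUBSYSTEM==\"net\", ACTION==\"add\", DRIVERS==\"?*\", " \
           "ATTR{address}==\"#{hwaddr.downcase}\", ATTR{type}==\"1\", " \
           "KERNEL==\"eth*\", NAME=\"#{name}\""
end

puts rules
```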




On Wednesday, 24 August 2016 16:41:28 UTC+1, LinuxDan wrote:
>
> Very nice, Luke.
>
> Does the code that lets you custom-name your interfaces live in github or 
> puppet-forge anywhere ?
>
> If not, would you be willing to share ?  I can bring brownies and/or beer 
> to the collaboration :)
>
> Dan White | d_e_...@icloud.com 
> 
> “Sometimes I think the surest sign that intelligent life exists elsewhere in 
> the universe is that none of it has tried to contact us.”  (Bill Waterson: 
> Calvin & Hobbes)
>
>
> On Aug 24, 2016, at 11:36 AM, Luke Bigum wrote:
>
> Here we have very strict control over our hardware and what interface goes 
> where. We keep CentOS 6's naming scheme on Dell hardware, so p2p1 is PCI 
> slot 2, Port 1, and don't try rename it. We have a 3rd party patch manager 
> tool (patchmanager.com), LLDP on our switches, and a Nagios check that 
> tells me if an interface is not plugged into the switch port it is supposed 
> to be plugged into (according to patchmanager). This works perfectly on 
> Dell hardware because the PCI name mapping works. On really old HP gear it 
> doesn't work, so we fall back on always assuming eth0 is the first onboard 
> port, etc. If the kernel scanned these devices in a different order we'd 
> get the same breakage you describe, but that's never happened on its own; 
> it's only happened when an engineer has added or re-arranged cards.
>
> We still need some sort of "glue record" that says "this interface should 
> be up and have this IP". In our older designs this was managed entirely in 
> Hiera - so there's a giant multi-level hash that we run create_resources() 
> over to define every single network interface. You can imagine the amount 
> of Hiera data we have. In the newer designs which are a lot more of a 
> role/profile approach I've been trying to conceptualise the networking 
> based on our profiles. So if one of our servers is fulfilling function 
> "database" there will be a Class[profile::database]. This Class might 
> create a bonded interface for the "STORAGE" network and another interface 
> for the "CLIENT" network. Through various levels of Hiera I can define the 
> STORAGE netwo

[Puppet Users] Re: How to handle predictable network interface names

2016-08-24 Thread Luke Bigum
Here we have very strict control over our hardware and what interface goes 
where. We keep CentOS 6's naming scheme on Dell hardware, so p2p1 is PCI 
slot 2, Port 1, and don't try rename it. We have a 3rd party patch manager 
tool (patchmanager.com), LLDP on our switches, and a Nagios check that 
tells me if an interface is not plugged into the switch port it is supposed 
to be plugged into (according to patchmanager). This works perfectly on 
Dell hardware because the PCI name mapping works. On really old HP gear it 
doesn't work, so we fall back on always assuming eth0 is the first onboard 
port, etc. If the kernel scanned these devices in a different order we'd 
get the same breakage you describe, but that's never happened on its own; 
it's only happened when an engineer has added or re-arranged cards.

We still need some sort of "glue record" that says "this interface should 
be up and have this IP". In our older designs this was managed entirely in 
Hiera - so there's a giant multi-level hash that we run create_resources() 
over to define every single network interface. You can imagine the amount 
of Hiera data we have. In the newer designs which are a lot more of a 
role/profile approach I've been trying to conceptualise the networking 
based on our profiles. So if one of our servers is fulfilling function 
"database" there will be a Class[profile::database]. This Class might 
create a bonded interface for the "STORAGE" network and another interface 
for the "CLIENT" network. Through various levels of Hiera I can define the 
STORAGE network as VLAN 100, because it might be a different vlan tag at a 
different location. Then at the Hiera node level (on each individual 
server) I will have something like:

profile::database::bond_storage_slaves: [ 'p2p1', 'p2p2' ]

That's the glue. At some point I need to tell Puppet that on this specific 
server, the storage network is a bond of p2p1 and p2p2. If I took that 
profile to a HP server, I'd be specifying a different set of interface 
names. In some situations I even just put in one bond interface member, 
which is useless, but in most situations I find less entropy is worth more 
than having a slightly more efficient networking stack.

I have bounced around the idea of removing this step and trusting the 
switch - ie: write a fact to do an LLDP query for the VLAN of the switch 
port each interface is connected to, that way you wouldn't need the glue, 
there'd be a fact called vlan_100_interfaces. There are two problems with this 
approach: first, we end up trusting the switch to be our source of truth (it 
may not be correct, and what if the switch port is down?). Secondly, the 
quality and consistency of the LLDP information you get from different 
manufacturers of networking hardware varies a lot, so relying on LLDP 
information to define your OS network config is a bit risky for me.
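
If anyone does want to experiment with that idea, the parsing side is simple enough. Here is a rough Ruby sketch grouping interfaces by advertised VLAN from lldpctl-style key/value output - the sample text and field names are invented for illustration (real lldpctl output varies by lldpd version and switch vendor, which is exactly the risk mentioned above):

```ruby
# Hypothetical lldpctl-style key/value output (field names invented for
# illustration - real output differs by lldpd version and switch vendor).
sample = <<~LLDP
  lldp.p2p1.vlan.vlan-id=100
  lldp.p2p2.vlan.vlan-id=100
  lldp.em1.vlan.vlan-id=214
LLDP

# Group interface names by the VLAN id the switch advertises.
by_vlan = Hash.new { |h, k| h[k] = [] }
sample.each_line do |line|
  if line =~ /\Alldp\.([^.]+)\.vlan\.vlan-id=(\d+)/
    by_vlan[$2] << $1
  end
end

puts by_vlan['100'].inspect   # the would-be "vlan_100_interfaces" fact
```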

It's a different story for our VMs. Since they are Puppet defined we 
specify a MAC address and so we "know" which MAC will be attached to which 
VM bridge. We drop a MAC based udev rule into the guest to name them 
similarly, ie: eth100 is on br100. I could technically use the same Puppet 
code to write udev rules for my hardware, but the PCI based naming scheme 
is fine so far.

That's what we do, but it's made easy by an almost homogeneous hardware 
platform and strict physical patch management.

When I read about your problem, it sounds like you are missing a "glue 
record" that describes your logical interfaces to your physical devices. If 
you were to follow something along the lines of our approach, you might 
have something like this:

class profile::some_firewall(
  $external_interface_name  = 'eth0',
  $internal_interface_name  = 'eth1',
  $perimiter_interface_name = 'eth2'
) {
  firewall { '001_allow_internal':
    chain   => 'INPUT',
    iniface => $internal_interface_name,
    action  => 'accept',
    proto   => 'all',
  }

  firewall { '002_some_external_rule':
    chain   => 'INPUT',
    iniface => $external_interface_name,
    action  => 'accept',
    proto   => 'tcp',
    dport   => '443',
  }
}

That very simple firewall profile probably already works on your HP 
hardware, and on your Dell hardware you'd need to override the 3 parameters 
in Hiera:

profile::some_firewall::internal_interface_name: 'em1'
profile::some_firewall::external_interface_name: 'p3p1'
profile::some_firewall::perimiter_interface_name: 'p1p1'

Hope that helps,

-Luke

On Wednesday, 24 August 2016 14:55:38 UTC+1, Marc Haber wrote:
>
> Hi, 
>
> I would like to discuss how to handle systemd's new feature of 
> predictable network interface names. This is a rather hot topic in the 
> team I'm currently working in, and I'd like to solicit your opinions 
> about that. 
>
> On systems with more than one interface, the canonical way to handle 
> this issue in the past was "assume that eth0 is connected to network 
> foo, eth1 is connected to network bar, and eth2 is connected to 
> network baz" and to accept t

[Puppet Users] Changing namevar of resources triggering alias error

2016-07-29 Thread Luke Bigum
Can someone explain this to me? I thought I'd be able to change the title of a 
nagios_host resource but leave the namevar the same, to effectively write two 
nagios_host files to disk with the same content, but instead I'm triggering an 
error in the resource alias code. I didn't realise changing the namevar was 
using resource aliases under the hood?

Any ideas on how I could make this work?


[root@localhost ~]# cat test.pp 
nagios_host { 'foo':
  ensure    => present,
  host_name => 'foo',
  address   => 'foo',
  target    => '/tmp/foo',
}

nagios_host { 'foo_again':
  ensure    => present,
  host_name => 'foo',
  address   => 'foo',
  target    => '/tmp/foo_again',
}
[root@localhost ~]# 
[root@localhost ~]# 
[root@localhost ~]# puppet apply test.pp
Notice: Compiled catalog for localhost in environment production in 0.10 seconds
Error: Cannot alias Nagios_host[foo_again] to ["foo"] at /root/test.pp:13; 
resource ["Nagios_host", "foo"] already declared at /root/test.pp:6



--
Luke Bigum
Senior Systems Engineer

Information Systems
---

LMAX Exchange, Yellow Building, 1A Nicholas Road, London W11 4AN
http://www.LMAX.com/

Recognised by the most prestigious business and technology awards
 
2016 Best Trading & Execution, HFM US Technology Awards
2016, 2015, 2014, 2013 Best FX Trading Venue - ECN/MTF, WSL Institutional 
Trading Awards

2015 Winner, Deloitte UK Technology Fast 50
2015, 2014, 2013, One of the UK's fastest growing technology firms, The Sunday 
Times Tech Track 100
2015 Winner, Deloitte EMEA Technology Fast 500
2015, 2014, 2013 Best Margin Sector Platform, Profit & Loss Readers' Choice 
Awards

---

FX and CFDs are leveraged products that can result in losses exceeding your 
deposit. They are not suitable for everyone so please ensure you fully 
understand the risks involved.

This message and its attachments are confidential, may not be disclosed or used 
by any person other than the addressee and are intended only for the named 
recipient(s). This message is not intended for any recipient(s) who based on 
their nationality, place of business, domicile or for any other reason, is/are 
subject to local laws or regulations which prohibit the provision of such 
products and services. This message is subject to the following terms 
(http://lmax.com/pdf/general-disclaimers.pdf), if you cannot access these, 
please notify us by replying to this email and we will send you the terms. If 
you are not the intended recipient, please notify the sender immediately and 
delete any copies of this message.

LMAX Exchange is the trading name of LMAX Limited. LMAX Limited operates a 
multilateral trading facility. LMAX Limited is authorised and regulated by the 
Financial Conduct Authority (firm registration number 509778) and is a company 
registered in England and Wales (number 6505809).

LMAX Hong Kong Limited is a wholly-owned subsidiary of LMAX Limited. LMAX Hong 
Kong is licensed by the Securities and Futures Commission in Hong Kong to 
conduct Type 3 (leveraged foreign exchange trading) regulated activity with CE 
Number BDV088.

-- 
You received this message because you are subscribed to the Google Groups 
"Puppet Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to puppet-users+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/puppet-users/1154755873.4737099.1469807604114.JavaMail.zimbra%40lmax.com.
For more options, visit https://groups.google.com/d/optout.


Re: [Puppet Users] Nodes getting catalog with incorrect resource.

2016-07-04 Thread Luke Bigum
Can you explain the symptoms of your problem a bit more, and link to the NRPE 
module you are using if it's open source?

You've described everything you've checked, but I still don't know exactly 
what's wrong / failing and what you are expecting to see :-)

Does the NRPE module collect exported resources? Could you be collecting Redhat 
resources onto your ubuntu machine?

There may be some level of catalog caching happening here, but getting just the 
operating system Facts confused and nothing else seems unlikely. You could try 
temporarily unplugging PuppetDB from your Master (storeconfigs = false?) and 
see what catalog gets compiled then. The Master should also have the latest 
Facts for every node on disk in YAML format here:

ls -ld /var/lib/puppet/yaml/facts/$(hostname).yaml

Assuming that's still the right path in Puppet 4.
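
Those cached fact files are YAML, so they're easy to inspect programmatically. A rough Ruby sketch, using a trimmed hypothetical sample rather than a real file (real cached-facts YAML carries a `!ruby/object:Puppet::Node::Facts` tag and more keys, so loading an actual file may need extra handling):

```ruby
require 'yaml'

# Trimmed, hypothetical stand-in for a cached facts file. Real files
# carry a !ruby/object tag and extra keys (name, timestamp, ...).
sample = <<~FACTS
  values:
    osfamily: Debian
    operatingsystem: Ubuntu
FACTS

facts = YAML.safe_load(sample)
puts facts['values']['osfamily']   # what the Master last saw for this node
```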

--
Luke Bigum
Senior Systems Engineer

Information Systems

- Original Message -
From: "Gino Lisignoli" 
To: "puppet-users" 
Sent: Monday, 4 July, 2016 01:09:28
Subject: [Puppet Users] Nodes getting catalog with incorrect resource.

I'm having the strangest puppet problem I have ever seen:

I'm trying to get a nrpe module working on some nodes (Ubuntu 12.04 and 
14.04), but both modules I have tried, seem to think the operating system 
is Redhat.

To explain this a little bit better:

- Facter on all these nodes reports that the osfamily is Debian, and the 
operatingsystem is Ubuntu, which is correct.
- When I check the reported facts for the nodes on the puppetserver they 
are Debian and Ubuntu
- When I look at the catalog that gets compiled I can see that all my other 
resources seem to be correct, we have some osfamily/system code that checks 
the os and sets up apt/yum repositories. So I know that's working correct.
- When I r10k my entire environment to the nodes and run it locally with 
'puppet apply' there are no problems at all.
- Sometimes when I clear the environment cache, the problem will go away 
for the first puppet run, but all future puppet runs have the same problem.
- I've checked Hiera; the only parameter for nrpe is to specify our ntp 
server

I'm running puppet 4.5.1, puppetserver 2.4.0 and puppetdb 4.1.0. We use 
Foreman to classify our nodes with no smart parameters etc. The only 
classification it does is assign a role to a node.

My only ideas are:

- There's some sort of catalog/resource cache that is getting 
generated with the wrong nrpe resource. AFAIK this doesn't exist? As only 
whole catalogs are cached on the puppetserver
- Somehow puppetdb might be involved? I know puppetdb has 
resources/catalogs on it. Does puppetserver use puppetdb as a cache for 
resources/catalogs when generating a catalog for nodes? If so is there a 
way to debug why it would be generating a catalog with an incorrect 
resource for nodes?

Anyone have any ideas how to debug this?

-- 
You received this message because you are subscribed to the Google Groups 
"Puppet Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to puppet-users+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/puppet-users/889f44ad-1ece-4dff-a769-b91c916d4bee%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.

Re: [Puppet Users] Git Repo Strategy

2016-06-16 Thread Luke Bigum
Git doesn't work like that: the repo in its entirety is at a certain commit / 
tag / version, so you can't do what I think you are asking.

You might find this useful. We went from monolithic to split modules over a 
year ago, here's the Bash that did it. You'll want to adjust certain things, 
like the module name prefix:


#! /bin/sh

MODULE=$1
BASE_DIR=$PWD
WORK_DIR="${BASE_DIR}/working"

CONTROL_REPO='lmax-controlrepo'
GIT_REPO_BASE='https://GITLABSERVER/GITLABGROUP/'

# Print the expanded commands for this module (pipe to sh to actually run them):
cat << EOF
mkdir working
git clone ${GIT_REPO_BASE}/${CONTROL_REPO}.git ${WORK_DIR}/${CONTROL_REPO}
cd ${WORK_DIR}/${CONTROL_REPO}

git subtree split -P site/${MODULE} -b lmax-${MODULE}

cd ${WORK_DIR}
mkdir lmax-${MODULE}
cd lmax-${MODULE}
git init 
git pull ${WORK_DIR}/${CONTROL_REPO} lmax-${MODULE}

git remote add origin ${GIT_REPO_BASE}/lmax-${MODULE}.git
git push origin -u master

cd ${BASE_DIR}
EOF



--
Luke Bigum
Senior Systems Engineer

Information Systems

- Original Message -
From: "broncosd183" 
To: "puppet-users" 
Sent: Wednesday, 15 June, 2016 20:45:41
Subject: Re: [Puppet Users] Git Repo Strategy

EDIT: I've found this link by Gary which details how to change the 
basemodulepath for each environment.conf file to effectively read in a 
monolithic repo containing all of the desired modules in your puppetfile  ( 
http://garylarizza.com/blog/2014/03/07/puppet-workflow-part-3b/ ).  My 
modified question is: once this has been implemented, is there any way to 
implement more precise module control in the Puppetfile, i.e. pass 
references or commit tags as if the modules were in individual repos?

On Wednesday, June 15, 2016 at 3:27:10 PM UTC-4, broncosd183 wrote:
>
> Hey all,
> I'm currently starting to implement the puppetfile format and have hit a 
> wall of sorts. We currently are stuck on that old monolithic repo of 
> modules and are eventually looking to move away from this sometime in the 
> near future. My question is, for now is there any way to make a puppetfile 
> for individual modules within this repo? We have hosted it on github and I 
> understand how to pass the url and references if the modules are in their 
> own repos. Can the same be done for modules in our monolithic repo? At the 
> very least we were hoping to make a puppetfile for the current repo 
> configuration and slowly transition out of it and update the puppetfile 
> accordingly. 
>
> Thanks!
>
> On Wednesday, June 15, 2016 at 11:35:33 AM UTC-4, Bret Wortman wrote:
>>
>> I made the conversion a little over a year ago and it's been a dream ever 
>> since. The Puppetfiles aren't that hard -- We store each module in its own 
>> repo and use branches to determine environments. For each new environment 
>> we want to use, we just branch the "puppet" repo which contains the 
>> Puppetfile and let it know which modules will be under test for this 
>> environment. It's a lot simpler than it sounds.
>>
>> On Wednesday, June 15, 2016 at 11:27:28 AM UTC-4, broncosd183 wrote:
>>>
>>> Awesome thanks for the feedback and options Rich and Christopher. I'm 
>>> outlining a plan of attack now and going to make a pass at installing R10k 
>>> and configuring it correctly. The main hurdle was the puppetfile and its 
>>> dependencies; however, that looks much more feasible now.
>>>
>>> On Friday, June 10, 2016 at 10:56:03 AM UTC-4, Rich Burroughs wrote:
>>>>
>>>> I'm assuming this could be done. We're talking about UNIX shell 
>>>> commands and there's a way to do just about anything. But I can't imagine 
>>>> it being simple or fun to use. Like could you do Pull Requests on Github 
>>>> between these repos? Maybe, depending on how you set it up. People 
>>>> nowadays 
>>>> recommend against monolithic repos too, and that's what you'd have. You'd 
>>>> just have a bunch of them.
>>>>
>>>> The normal recommended workflow with r10k is using branches for those 
>>>> environments, not separate repos. Then you have the ability to merge 
>>>> between branches, so it's easy to promote those changes along your 
>>>> pipeline.
>>>>
>>>> I remember back before I started using r10k, it seemed very confusing 
>>>> to me. I think there's a bit more info out about it now. In terms of 
>>>> getting a Puppetfile setup, one of the hard things there is that you need 
>>>> to account for all of the dependencies. Rob Nelson made this cool Ruby gem 
>>>> that makes generating the file a bit easier. You can pass it a set of

Re: [Puppet Users] Multiple CA setup.

2016-06-08 Thread Luke Bigum
I think the dated docs you are reading are probably it :-)

Running very much on 6 year old memory here, when I tried it last... You create 
a new Puppet CA cert with multiple SANs on it for each of your Puppet Masters' 
hostnames, and distribute that to each Master. The agents can be signed by any 
Puppet Master, and will then be able to speak to any Puppet Master because 
essentially it's the same CA (in multiple places). The issue is if you make use 
of the certificate revocation list to deny agents - because each CA could 
potentially issue the same serial number, this is not going to work. If you 
don't rely on the revocation list then this is not an issue. All this may have 
changed over the years.

Another way to do it is have a central signer (you can split the CA 
functionality from other parts of the Master in puppet.conf) and then sync the 
signed certs to each Master. That way if your central CA goes down you can't 
build any *new* agents, but your existing nodes will work because the Puppet 
Masters at each DC have a copy of the signed certificates. The revocation list 
works with this approach. That may satisfy your "DCs running independently" 
requirement.

Question: if your DCs are moving to be a stand alone architecture, why do you 
need your Agents to check in to other Masters? Why not just have a CA per DC? 
The obvious down side is if your DC's Puppet Master goes down you can't do any 
Puppet runs in that DC, but if you've got multiple anyway I'll assume your 
Masters are deployed with Puppet themselves, so it shouldn't be that hard to 
recover / rebuild?

--
Luke Bigum
Senior Systems Engineer

Information Systems

- Original Message -
From: "Peter Berghold" 
To: "puppet-users" 
Sent: Wednesday, 8 June, 2016 15:40:19
Subject: [Puppet Users] Multiple CA setup.

In the puppet setup that I have where I work it has been increasingly more
desirable if not required to have each of our data centers be able to
operate standalone. Because of this I've been Googling around looking for a
methodology to allow multiple certificate authorities in puppet. Currently
we have our grand master puppet server in one Data Center and we have
several Puppet Masters in other data centers in geographically diverse
areas. When a new client is added with our current setup that new client
has to reach out and get it certificate signed by The Grandmaster. This is
getting us through setting up puppet currently but long-term this is
undesirable.

Can anybody point me to a methodology for setting up multiple certificate
authorities that actually works? Looks like the pages on the topic I have
read so far are outdated.

-- 
You received this message because you are subscribed to the Google Groups 
"Puppet Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to puppet-users+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/puppet-users/CAArvnv2OQP5QcG9TTy_EVTursMkUdW2MhB7%3D_ZPiH7XnQ1mWrQ%40mail.gmail.com.
For more options, visit https://groups.google.com/d/optout.

Re: [Puppet Users] merge hashes and create_resources

2016-06-06 Thread Luke Bigum
In Puppet 3.x you have to use the Define Wrapper "trick". It's not pretty, but 
without lambda functions it's all you've got. If it's 4.x, see Henrik's 
earlier post.




$input_base = {
  input1 => { 'port' => '2001', 'component' => 'component1' },
  input2 => { 'port' => '2002', 'component' => 'component2' },
}
$input_node = {
  input3 => { 'port' => '2003', 'component' => 'component3' },
  # possibly input4... inputN
}
$input_merged = merge($input_base, $input_node)

file { 'input_merged_file':
  content => template("template_that_uses_input_merged"),
}

# Uses the $port and $component params in individual File resources
define input_node_wrapper ($port, $component) {
  file { "a_file_for_input_node_${name}":
    path    => 'somewhere',
    content => 'some_other_template',
  }
}
create_resources('input_node_wrapper', $input_merged)
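
Since ERB templates are plain Ruby anyway, the merge semantics are easy to check outside Puppet. Here's a rough Ruby equivalent of the merge-then-iterate pattern (sample data; the generated names mirror the define above):

```ruby
# Same sample data as the Puppet example above.
input_base = {
  'input1' => { 'port' => '2001', 'component' => 'component1' },
  'input2' => { 'port' => '2002', 'component' => 'component2' },
}
input_node = {
  'input3' => { 'port' => '2003', 'component' => 'component3' },
}

# merge($input_base, $input_node): right-hand hash wins on key clashes,
# exactly like Ruby's Hash#merge.
input_merged = input_base.merge(input_node)

# create_resources('input_node_wrapper', $input_merged): one wrapper per
# key, each receiving that key's port/component as parameters.
file_names = input_merged.map do |name, params|
  "a_file_for_input_node_#{name} (port #{params['port']})"
end

puts file_names
```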



--
Luke Bigum
Senior Systems Engineer

Information Systems

- Original Message -
From: "Robert Poulson" 
To: "puppet-users" 
Sent: Sunday, 5 June, 2016 18:56:48
Subject: [Puppet Users] merge hashes and create_resources

Dear List,

I've been using Puppet for over a year now and I'm quite enjoying it. I've
learned some stuff but there is of course always room for improvement. Now
I have a task which needs a nicer solution than I'm currently capable of.

I have a hash of items with two key/value pairs, which is *the same for
every node*:

$input_base = {
  input1 => { 'port' => '2001', 'component' => 'component1' },
  input2 => { 'port' => '2002', 'component' => 'component2' }
}

Then I have a second hash, which is different for every node.

$input_node = {
  input3 => { 'port' => '2003', 'component' => 'component3' },
  [possibly input4... inputN]
}

These all will be used in a single template. So I can simply do:

$input_merged = merge($input_base, $input_node)

The corresponding port/component entries will then be added in a
configuration file with an $input_merged.each_pair - so far so good.



Now the actual task: I need an extra configfile, generated from a template,
for all the inputN elements of the $input_node - but only for them, not for
the $input_base elements - like this:

* /path/to/project_input3.conf
* [/path/to/project_input4.conf...project_inputN.conf]

This would be possible with create_resources:

create_resources(file, $input_merged)

but in order to do this, $input_merged should have the values of a file
resource - at least "path" and "source => template()". This is not the case.

I could define $input_node initially as a file resource hash - but in this
case I can't merge it anymore with $input_base.

Currently I have no other idea than manually map the input_node elements
into a third hash, and use that with create_resources, but there should be
a nicer solution. Do you have an idea? :-)


Best,
Rp

-- 
You received this message because you are subscribed to the Google Groups 
"Puppet Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to puppet-users+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/puppet-users/CANwwCtzSGFSUaJsraux2sAifauq%3D9%2BHuZT-kt6jpUBJWnvVZ%3DQ%40mail.gmail.com.
For more options, visit https://groups.google.com/d/optout.

Re: [Puppet Users] Strategies for "boring" packages

2016-04-19 Thread Luke Bigum
In my mind the "purest" way would be to go individual modules for each 
package/service combination. If the only requirement is that you are handling 
the differences between Red Hat and Debian flavours, then a module per 
package/service. These modules would be wholly self-contained and rely on some 
of the standard set of Facter facts. And then you could publish them :-) It 
would also avoid future duplicate resource declarations where someone's 
embedded "packageX" into one profile, and it clashes with "packageX" in another 
profile.

I can see the argument for putting package installs and service starts into a 
profile but only if it's global for every operating system. So if there was 
profile::webserver that needed Package[openssl] and that was correct for all 
operating systems, then fine. However if you have to start doing conditional 
logic to find the right name of Package[openssl] for Red Hat and Debian, then 
profile::webserver is not the place. profile::webserver is a container of 
business logic that relates wholly and only to your business and your team. The 
exact implementation of Package[openssl] has nothing to do with 
profile::webserver, as long as openssl gets there somehow, that should be all 
you care about at the Profile level. Implementing Package[openssl] really 
depends on the operating system Facts alone, and this should be in its own 
module... and... all of a sudden your profile::webserver is operating system 
agnostic, which is cool.
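Sketched out (the module name and package names here are illustrative assumptions, not anything from the thread; a package whose name actually differs between families is used so the split is visible):

```puppet
# The profile stays free of OS conditionals and just pulls in the
# wrapper module - it is now operating system agnostic.
class profile::webserver {
  include ::apache_httpd
}

# The wrapper module owns the OS-specific detail, driven by Facter facts.
class apache_httpd {
  case $::osfamily {
    'RedHat': { $pkg = 'httpd' }
    'Debian': { $pkg = 'apache2' }
    default:  { fail("apache_httpd: unsupported osfamily ${::osfamily}") }
  }
  package { $pkg:
    ensure => installed,
  }
}
```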

Question - why is your team getting annoyed at generating boilerplate 
code? Surely you have some sort of "puppet module create" wrapper script or you 
use https://github.com/voxpupuli/modulesync? If you've got so much overhead 
adding boilerplate code for your boring modules then I think you're tackling 
the wrong problem... If you can bring the boilerplate code problem down to 1-2 
minutes, it's got to only take another 5-10 minutes tops to refactor one 
package{} and one service{} resource out of the profile and into its own 
module, and then your team argument kind of goes away.

Question - why are you writing 120 modules yourself? Are there really no other 
implementations of these things on the Forge or GitHub?

--
Luke Bigum


- Original Message -
From: "J.T. Conklin" 
To: "puppet-users" 
Sent: Tuesday, 19 April, 2016 01:47:37
Subject: [Puppet Users] Strategies for "boring" packages

At work, we've written about 120 modules in our puppet code repository.
About two dozen are "interesting", in that they have lots of parameters
and configuration that is specific to our environment.  The balance are
"boring"; rather, they are mostly boilerplate with minimal configuration.
For example, our modules abstract the differences in package and service
names between RedHat and Debian based systems.

However, there is some disagreement amongst our puppeteers about how to
handle these "boring" modules. One side objects to the amount of boiler-
plate and duplication, and would prefer that we simply define packages
in our role/profile modules. The other side claims that abstracting
package and service names is value enough to justify the overhead, and
that "boring" packages often become "interesting" as new
requirements for flexibility and customization develop over time. Each
group is firmly convinced that their opinion is the right one.

So I throw the question to the puppet community... What strategies do
you use for "boring" modules so you're not overwhelmed by hundreds of
small boilerplate modules?

Thanks for sharing,

--jtc


Re: [Puppet Users] directory environemnt doesn't seem to be working for vcsrepo

2016-03-15 Thread Luke Bigum
The error itself is quite clear - you've got an empty string for the parameter 
'revision' when you should have something that's not whitespace (according to 
that regex).

To actually figure out why you're getting an empty string, you're going to need 
to post the relevant portions of app.pp (or its entirety).
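For reference, the regex in the error (/^\S+$/) just means 'revision' must be a non-empty string containing no whitespace. The failing resource around line 163 probably looks something like this reconstruction (every value below is invented); if the revision comes from an environment-specific Hiera lookup, an empty result in staging would produce exactly that error:

```puppet
# Hypothetical reconstruction of the failing resource - all values invented.
vcsrepo { '/var/www/wp007/wp-content':
  ensure   => latest,
  provider => git,
  source   => 'git@git.example.com:wordpress/wp-content.git',
  # If this value comes from Hiera and the staging lookup returns '',
  # you get: Invalid value "". Valid values match /^\S+$/.
  revision => 'staging',
}
```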

--
Luke Bigum

- Original Message -
From: "Sans" 
To: "puppet-users" 
Sent: Monday, 14 March, 2016 21:40:34
Subject: [Puppet Users] directory environemnt doesn't seem to be working for 
vcsrepo

Hi there,

I'm seeing a very strange error, and I cannot figure out where it's 
coming from:


*Error: Failed to apply catalog: Parameter revision failed on 
Vcsrepo[/var/www/wp007/wp-content]: Invalid value "". Valid values match 
/^\S+$/. at /usr/local/p19/puppet/modules/wordpress/manifests/app.pp:163*


Line #163 is where I specified the vcsrepo to do the git pull from the staging 
branch. I cannot get any other info using -td or --trace. Has anyone seen 
this error before, or does anyone know what's going on?
Just to give you a bit of background, the PuppetMaster is running with two 
environments, development and staging, with directory environments enabled. 
This error is coming from the staging instances.  Is it possible it's not 
getting the environment-specific values to compile the catalog? How do I do 
further debugging?

Got really stuck in the middle, so any help will be appreciated.

-San



[Puppet Users] Re: Making a "role" fact work

2016-01-29 Thread Luke Bigum
This might be relevant:

https://groups.google.com/forum/#!searchin/puppet-users/luke$20bigum|sort:date/puppet-users/XWAcm152cyQ/P_rpi50XBAAJ

The ENC above inserts a top scope variable into a node's manifest, designed 
to be used as a "role" Fact. It reads from one of two YAML files, either 
explicit hostnames or hostname regex matches. It might meet your 
requirement half way before you get a "proper" ENC in place.

On Friday, 29 January 2016 15:50:50 UTC, Gareth Humphries wrote:
>
> ENC is the end game, but we have legacy hosts this has to work on.  Right 
> now I have site.pp which has a list of unpleasant regexes and an 'include 
> role::' stanza for each.  I could put '$role= ; 
> include role::$role' in each of them instead, but I would have to do that 
> in every single case, which I'm trying to avoid.
>
> External facts work, but not on the first run.  Because facts get loaded 
> before catalog compilation, the host doesn't know what to set that fact to 
> until it already has a catalog -  a little bit chicken-and-egg.
> If i'm relying on the role fact to get data out of hiera, I need that fact 
> available first run or compilation will fail.
>
>
> Perhaps the set-the-variable-everywhere approach is going to be the 
> solution; I was just hoping to find a way that doesn't require that.
>
>
> On Friday, January 29, 2016 at 3:20:20 PM UTC, jcbollinger wrote:
>>
>>
>>
>> On Friday, January 29, 2016 at 3:29:29 AM UTC-6, Gareth Humphries wrote:
>>>
>>> Thanks Gav,
>>>
>>> It's a good idea, though on the surface I don't think it will work for 
>>> us (we're trying to spin stuff up from a gold image using an ENC, and I 
>>> think having an extra magic step in between is a greater evil than explicit 
>>> declarations), but you've got me thinking down some other lines of 
>>> automation.
>>>
>>
>>
>> If you're relying on an ENC, then isn't that ENC assigning roles to 
>> machines?  In that case, why do you need a fact?  The ENC can inject 
>> top-level variables that you can use exactly as you would use facts.
>>
>> More generally, if you have centralized knowledge of what role each 
>> machine should have, then you should serve it centrally instead of storing 
>> it on nodes and relying on them to feed it back correctly.  Hiera would be 
>> another option for doing that.
>>
>> I'm really having trouble understanding why you are approaching the 
>> problem as you describe.  If indeed you have a way to assign a role class 
>> to your nodes without relying on the fact you're trying to create, then I 
>> don't see why you need the fact.  Moreover, I don't see why you need to 
>> analyze catalogs to extract the value for such a fact, instead of making 
>> Puppet itself manage the fact.  If you rely on external facts 
>>  
>> for the purpose, you can have your role classes manage an ordinary file on 
>> agent-side file system to make that node provide any given fact on 
>> subsequent runs.  As a side effect, this could even prevent assigning 
>> multiple role classes to the same node.  Indeed, this is one path by which 
>> you could arrange for nodes to provide the desired fact with their very 
>> first catalog request, though again, I don't understand what purpose the 
>> fact is supposed to serve.
>>
>>
>>
>> John
>>
>>
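
A minimal sketch of the external-fact approach John describes (the class name, role value, and file mode are assumptions; the facts.d path is Facter's standard external facts directory):

```puppet
# Sketch: the role class drops an external fact file, so the node
# reports its role on every catalog request after the first.
class role::webserver {
  file { '/etc/facter/facts.d/role.txt':
    ensure  => file,
    owner   => 'root',
    mode    => '0644',
    content => "role=webserver\n",
  }
}
```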



Re: [Puppet Users] user and service interdependencies

2015-12-11 Thread Luke Bigum
[root@localhost ~]# usermod -d /home/foo test
usermod: user test is currently used by process x

Huh, learn something new every day :-)

This is a feature of the provider, usermod. To be honest I think anything you 
try to do here with Puppet is going to be pretty nasty. You could do this:

user { 'tools':
  ensure => present,
  home   => '/apps/tools',
}
exec { 'fix_home':
  command => '/usr/local/bin/fix_home.sh',
  unless  => "grep '/apps/tools' /etc/passwd",
}

Which is very specific and pretty horrid. I think you're trying to solve this 
in the wrong way though. What needs to happen is some sort of 
migration/upgrade/downtime (execute a series of commands in sequence), and 
Puppet is not a very good tool for that. Instead I would invest time into 
ensuring you don't get into this problem in the first place:

define user_and_service($username, $service, $homedir) {
  validate_string($username)
  validate_string($service)
  validate_string($homedir)
  user { $username:
    home => $homedir,
  }
  service { $service:
    ensure  => 'running',
    require => User[$username],
  }
}

It's a bit of a dumb example, but it illustrates the idea: by making the home 
directory a mandatory parameter, you can never create the user without one in 
the first place.
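If Puppet really did have to drive the stop/modify/start sequence, one rough sketch (the service name and home directory come from the thread; the grep condition and command paths are assumptions) would be to order an Exec before the User resource:

```puppet
# Rough sketch only: stop the service before the user resource runs,
# so usermod can change the home directory, then start it again.
exec { 'stop_tools_before_usermod':
  command => '/sbin/service tools stop',
  # Skip the stop entirely if the passwd entry already has the new home.
  unless  => "grep -q '^tools:.*:/apps/tools:' /etc/passwd",
  path    => ['/bin', '/usr/bin'],
  before  => User['tools'],
}
user { 'tools':
  ensure => present,
  home   => '/apps/tools',
}
service { 'tools':
  ensure  => 'running',
  require => User['tools'],
}
```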

--
Luke Bigum
Senior Systems Engineer

Information Systems

- Original Message -
From: "Vadym Chepkov" 
To: puppet-users@googlegroups.com
Sent: Friday, 11 December, 2015 12:27:34 PM
Subject: [Puppet Users] user and service interdependencies

Hi,

How would one gracefully solve a problem like this, which we face from time to 
time, mostly in the development cycle?
Let's say somebody wrote code like this (for simplicity):

user { 'tools':
  ensure => present,
} 

service { 'tools':
  ensure => 'running',
  require => User['tools'],
}

It ran, created the user, started the service. Then somebody realized, hey, I 
didn't set home for the user, and updated the code:
user { 'tools':
  ensure => present,
  home => '/apps/tools',
} 

Problem here: puppet can't apply this code anymore, since the 'user' resource 
calls the usermod command, which refuses to update a user's home directory 
while one of that user's processes is running.
The service needs to be stopped, the user modified, and then the service 
started.  One could create an exec to stop the service, but I can't figure out 
how to 'notify' this exec.

Thanks,
Vadym





Re: [Puppet Users] Custom facts per node.. only via /etc/facter/facts.d/fact_xyz.txt per node?

2015-12-04 Thread Luke Bigum
An ENC can be used to insert a sort of "Puppet Master defined Fact" (which is 
actually a top scope variable/parameter) before catalog compilation. Note that 
this will never appear in "facter -p" on the Agents, because it's not really a 
Fact.

Code:

https://gist.github.com/lukebigum/20231e70545a298b7dc5

And the data file looks like:

[root@master ~]# head -n10 /etc/puppet/roles.yaml 
#Managed by Puppet
---
host.example.com:
  parameters:
role: woof
server.example.com:
  parameters:
role: cows

--
Luke Bigum
Senior Systems Engineer

Information Systems


- Original Message -
From: "Hubert Schmoll" 
To: "Puppet Users" 
Cc: s...@tetralog.de
Sent: Friday, 4 December, 2015 9:31:33 AM
Subject: [Puppet Users] Custom facts per node.. only via 
/etc/facter/facts.d/fact_xyz.txt per node?

Hello everyone,

Here's what I want to accomplish: 
I have 4 so-called servergroups

- ci
- dev
- stage
- production

and I want them to have specific facts depending on the group they are in. 
So on every node I created a file /etc/facter/facts.d/servergroup.txt, 
containing e.g.
*servergroup=stage* 

where then a 

*facter -d servergroup* on the client host
gives me:

*stage *

on puppet master, my hiera.yaml looks like this:

:hierarchy:
  - "servergroup/%{::servergroup}"
  - "nodes/%{::clientcert}"
  - common

and in my hieradata I've got a folder servergroup with a file stage.yaml, 
which holds:

message: '***   Preparing STAGE environment   ***'
repo_stage_enabled: '1'
repo_prod_enabled:  '0'
nrpe_allowed_hosts: '192.168.3.4'

My module which installs NRPE then uses these values, like:

$nrpe_allowed_hosts = hiera('nrpe_allowed_hosts')


So far this works: every stage node gets these values; production or ci 
nodes get different values. All good.
What I do not like is that I have to create the file 
"/etc/facter/facts.d/servergroup.txt" on each node.

I'd rather keep the information about which servergroup each node belongs to 
on my puppetmaster. 

Any ideas? Use an ENC? Create some kind of custom facts on the puppet master?

Thanks and best regards

Hubert


Re: [Puppet Users] Using hiera to configure the jgazeley/ossec module

2015-09-09 Thread Luke Bigum
I would isolate the Hiera lookup first, run it by hand on your Puppet Master:

  hiera --debug -c /etc/puppetlabs/code/hiera.yaml -y $(puppet config print 
yamldir)/facts/vm.yaml ossec::client::ossec_server_ip ::environment=development

The above assumes the certname of your node is actually "vm"; it's probably 
not, so change the path to your node's YAML Facts cache.

--
Luke Bigum

- Original Message -
From: "Todd Courtnage" 
To: "Puppet Users" 
Sent: Tuesday, 8 September, 2015 10:49:22 PM
Subject: [Puppet Users] Using hiera to configure the jgazeley/ossec module

I'm in the process of refactoring our puppet to make use of r10k, hiera and 
roles/profiles, as seems to be the suggested methodology these days.

I've successfully got the ericsson/motd and puppetlabs/apt modules up and 
running, configured appropriately with hiera, doing what I want in 
various environments.

I'm trying (so far unsuccessfully) to use hiera to configure this module 
(https://forge.puppetlabs.com/jgazeley/ossec). r10k is configured to pull 
in the ossec module and dependencies (which it has).

I'm simply attempting to set the ossec_server_ip parameter in the 
ossec::client class, but all I ever get is a "Must pass ossec_server_ip to 
Class[Ossec::Client]..." error. I get this from running a "puppet agent 
--test --noop" on an agent or running a puppet apply directly on the module. 
I feel like this should be incredibly simple and that I'm just missing 
something completely obvious.

This is with the open-source puppetserver 4.2.1 on Ubuntu 14.04, with an 
agent running puppet 4.2.1 as well.

/etc/puppetlabs/code/hiera.yaml (unchanged from default)
---
:backends:
  - yaml
:hierarchy:
  - "nodes/%{::trusted.certname}"
  - common

:yaml:
# datadir is empty here, so hiera uses its defaults:
# - /etc/puppetlabs/code/environments/%{environment}/hieradata on *nix
# - %CommonAppData%\PuppetLabs\code\environments\%{environment}\hieradata 
on Windows
# When specifying a datadir, make sure the directory exists.
  :datadir:

/etc/puppetlabs/code/environments/development/hieradata/common.yaml:
---
classes:
  - 'profile::base'

motd::motd_content:
  - 'This is a development environment. Booya!'

apt::purge:
  sources.list.d: true
apt::update:
  frequency: daily

ossec::client::ossec_server_ip: 


/etc/puppetlabs/code/environments/development/site/profile/manifests/base.pp
class profile::base {
  class { '::motd': }
  class { '::apt': }
  class { '::ossec::client': }
}


Trying to "make it go":
root@vm:/etc/puppetlabs/code/environments# puppet agent --test --noop
Info: Retrieving pluginfacts
Info: Retrieving plugin
Info: Loading facts
Error: Could not retrieve catalog from remote server: Error 400 on SERVER: 
Must pass ossec_server_ip to Class[Ossec::Client] at 
/etc/puppetlabs/code/environments/development/site/profile/manifests/base.pp:4 
on node puppet.development.vm
Warning: Not using cache on failed catalog
Error: Could not retrieve catalog; skipping run

Any help/suggestions/pointers greatly appreciated.


Re: [Puppet Users] Re: Hiera auto binding

2015-07-09 Thread Luke Bigum
Woops, an amendment to look up the 'data' parameter of class 'foo' in Hiera:

# hiera -c /etc/puppet/hiera.yaml -y 
/var/lib/puppet/yaml/facts/nodename.domain.yaml environment=production foo::data


- Original Message -----
From: "Luke Bigum" 
To: puppet-users@googlegroups.com
Sent: Thursday, 9 July, 2015 9:38:29 AM
Subject: Re: [Puppet Users] Re: Hiera auto binding

Hi DJ,

In general the more Hiera calls you make the slower your manifests will 
compile. However, the difference between 1 and 10 is negligible; between 1 and 
1000 you might lose a few seconds. If you use hiera-gpg it will take a 
little longer (hiera-eyaml should be faster), and if you add more levels of 
depth it will take a little longer as well. As for the time difference between 
a data binding hiera lookup and an in-manifest function call, I'd say the 
difference is pretty much zero.

As for tracing where the data is coming from, it will only be coming from one 
of three places. The highest priority is an explicit value passed to a class 
parameter:

class { 'foo':
  data => "explicit",
}

The second highest priority is a Hiera data binding, and you can use Hiera on 
the command line to figure that out:

# hiera -c /etc/puppet/hiera.yaml -y 
/var/lib/puppet/yaml/facts/nodename.domain.yaml environment=production data

And the lowest priority is a class parameter default:

class foo($data = "defaultstring") {...}

--
Luke Bigum


- Original Message -
From: "DJ" 
To: puppet-users@googlegroups.com
Sent: Wednesday, 8 July, 2015 4:59:22 PM
Subject: [Puppet Users] Re: Hiera auto binding

Sorry correction, it's "Data binding" 

On Wednesday, 8 July 2015 21:27:44 UTC+5:30, DJ wrote:
>
> Hello,
>
> i was reading this doc "
> http://garylarizza.com/blog/2014/10/24/puppet-workflows-4-using-hiera-in-anger";
>  
> which says it's not good idea to use Hiera auto binding feature, can you 
> guys suggest if you are using this feature and you have noticed any 
> performance issues or any issues related to not able to find from where 
> data is coming?
>
> Any feedback please.
>
> Regards,
> DJ
>

-- 
You received this message because you are subscribed to the Google Groups 
"Puppet Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to puppet-users+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/puppet-users/44f2c779-5b72-4c2e-870a-477e6cc31935%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.
---

LMAX Exchange, Yellow Building, 1A Nicholas Road, London W11 4AN
http://www.LMAX.com/

#1 Fastest Growing Tech Company in the UK - Sunday Times Tech Track 100 (2014) 

2015 Best Margin Sector Platform - Profit & Loss Readers' Choice Awards
2015 Best FX Trading Venue - ECN/MTF - WSL Institutional Trading Awards
2014 Best Margin Sector Platform - Profit & Loss Readers' Choice Awards
2014 Best FX Trading Venue - ECN/MTF - WSL Institutional Trading Awards
2014 Best Infrastructure/Technology Initiative - WSL Institutional Trading 
Awards
2013 #15 Fastest Growing Tech Company in the UK - Sunday Times Tech Track 100
2013 Best Overall Testing Project - The European Software Testing Awards
2013 Best Margin Sector Platform - Profit & Loss Readers' Choice Awards
2013 Best FX Trading Platform - ECN/MTF - WSL Institutional Trading Awards
2013 Best Executing Venue - Forex Magnates Awards

---

FX and CFDs are leveraged products that can result in losses exceeding your 
deposit. They are not suitable for everyone so please ensure you fully 
understand the risks involved.

This message and its attachments are confidential, may not be disclosed or used 
by any person other than the addressee and are intended only for the named 
recipient(s). This message is not intended for any recipient(s) who based on 
their nationality, place of business, domicile or for any other reason, is/are 
subject to local laws or regulations which prohibit the provision of such 
products and services. This message is subject to the following terms 
(http://lmax.com/pdf/general-disclaimers.pdf), if you cannot access these, 
please notify us by replying to this email and we will send you the terms. If 
you are not the intended recipient, please notify the sender immediately and 
delete any copies of this message.

LMAX Exchange is the trading name of LMAX Limited. LMAX Limited operates a 
multilateral trading facility. LMAX Limited is authorised and regulated by the 
Financial Conduct Authority (firm registration number 509778) and is a company 
registered in England and Wales (number 6505809).

LMAX Hong Kong Limited is a wholly-owned subsidiary of LMAX Limited. LMAX Hong 
Kong is licensed by the Securities and Futures Commission in Hong Kong to 
conduct Type 3 (leveraged foreign exchange trading) regulated activity with CE 
Number BDV088.

Re: [Puppet Users] Re: Hiera auto binding

2015-07-09 Thread Luke Bigum
Hi DJ,

In general, the more Hiera calls you make, the slower your manifests will 
compile. However, the difference between 1 and 10 is negligible; between 1 and 
1000 you might lose a few seconds. If you use hiera-gpg it will take a 
little longer (hiera-eyaml should be faster), and if you add more levels of 
depth it will take a little longer as well. As for the time difference between 
a data binding Hiera lookup and an in-manifest function call, I'd say the 
difference is pretty much zero.

As for tracing where the data is coming from, it will only be coming from one 
of three places. The highest priority is an explicit value passed to a class 
parameter:

class { 'foo':
  data => "explicit",
}

The second highest priority is a Hiera data binding, and you can use Hiera on 
the command line to figure that out:

# hiera -c /etc/puppet/hiera.yaml -y 
/var/lib/puppet/yaml/facts/nodename.domain.yaml environment=production data

And the lowest priority is a class parameter default:

class foo($data = "defaultstring") {...}

--
Luke Bigum


- Original Message -
From: "DJ" 
To: puppet-users@googlegroups.com
Sent: Wednesday, 8 July, 2015 4:59:22 PM
Subject: [Puppet Users] Re: Hiera auto binding

Sorry correction, it's "Data binding" 

On Wednesday, 8 July 2015 21:27:44 UTC+5:30, DJ wrote:
>
> Hello,
>
> i was reading this doc "
> http://garylarizza.com/blog/2014/10/24/puppet-workflows-4-using-hiera-in-anger";
>  
> which says it's not good idea to use Hiera auto binding feature, can you 
> guys suggest if you are using this feature and you have noticed any 
> performance issues or any issues related to not able to find from where 
> data is coming?
>
> Any feedback please.
>
> Regards,
> DJ
>



Re: [Puppet Users] Node key merging/overloading - node inheritance vs hiera

2015-03-11 Thread Luke Bigum
- Original Message -
> From: "Christopher Wood" 
> >Puppet in fact provides three functions for lookups: there is
> >also hiera_hash().
> > 
> >In any case, you are quite right.  Which sort of lookup is intended is
> >an
> >attribute of the data -- part of the definition of each key -- but it is
> >not represented in or alongside the data.  Each user of the data somehow
> >has to know.  That could be tolerated, inconvenient as it is, except
> >that
> >it is incompatible with automated data binding.  This is an issue that
> >has
> >been recognized and acknowledged, though I'm uncertain whether it is
> >actively being addressed.
> 
> Could you possibly expound on the "Each user of the data somehow has to know"
> part? I'm having trouble with the notion that people would use puppet
> manifests and hiera data without knowing what's in them.

I can't speak for John, but I think I get his meaning; if I don't, here's my 
own opinion ;-)

If a user of a module is reading that module's documentation and parameters, it 
seems a bit nasty to assume the user must also go and read the Puppet module 
code in great detail to find out what type of Hiera call is being used.  
Passing data to the module should be simply defined, eg: "this parameter takes 
an array" or "this parameter is a comma separated string".  For a module to 
assume that it can or should attempt to do some sort of deep merging seems 
overly complicated, and it shifts the focus away from the user providing the 
right data to a well written module. Rather than have "classname::merge => 
true", I would advocate something like this, which puts the user in complete 
control of the data reaching its modules in a correct and easily testable 
manner:


class profile::dns {
  #lookup my DNS data
  $hiera_dns_server_array = hiera_array('dns::server')
  $common_dns_server = '127.0.0.1'

  class { 'resolv':
    dns_servers => [ $hiera_dns_server_array, $common_dns_server ],
  }
}


Something like this seems like I'm telling a module *how* to look up my own 
data, rather than passing the right data to the module:


class resolv (
  $dns_servers_key_name  = 'dns_servers',
  $dns_servers_key_merge = false,
) {
  if ($dns_servers_key_merge) {
    $dns_servers = hiera_array($dns_servers_key_name)
  } else {
    $dns_servers = hiera($dns_servers_key_name)
  }
}

class { 'resolv': dns_servers_key_merge => true }


I'd also have to code it to selectively use Hiera or not (some people don't 
use Hiera), and that would get even worse.  The second example of module 
design may be super awesomely flexible in terms of how I can structure my 
Hiera data, but it doesn't fit the direction the community is moving in terms 
of module design.

-Luke

Re: [Puppet Users] Node key merging/overloading - node inheritance vs hiera

2015-03-11 Thread Luke Bigum


On Wednesday, March 11, 2015 at 4:35:36 PM UTC, Bostjan Skufca wrote:
>
>
>
>> Something like this seems like I'm telling a module *how* to look up my 
>> own data, rather than passing the right data to the module:
>>
>>
>> class resolv (
>>   $dns_servers_key_name = 'dns_servers',
>>   $dns_servers_key_merge = false,
>> ) {
>>   if ($dns_servers_key_merge) {
>> $dns_servers = hiera_array($dns_servers_key_name)
>>   } else {
>> $dns_servers = hiera($dns_servers_key_name)
>>   }
>> }
>>
>> class { 'resolv': dns_servers_key_merge => true }
>>
>>
>> I'd also have to code it to selectively use Hiera or not (some people 
>> don't) and that would get even worse.  The second example of module design 
>> may be super awesomely flexible in terms of how I can structure my Hiera 
>> data, but it doesn't fit the direction the community is moving in terms of 
>> module design.
>>
>
>
> This is almost what I am looking for. I have an alternate approach: what 
> if merging vs nonmerging is decided based on hiera key?
>
>

That is my approach: that class would do an implicit Hiera lookup for those 
class parameters; I just illustrated the point with a resource-like 
declaration as an example. While the above method would work, I don't think 
I've made my point about not putting this personalised logic in the 
"resolv" module itself. The above example is not so good. Gary Larizza 
explains it very well here if you haven't seen it 
(https://www.youtube.com/watch?v=v9LB-NX4_KQ). That video should answer 
the questions in your second reply to me too, BTW.

The above code example is a bad idea for these reasons:

- the resolv module is tightly coupled to the data; it's in control of how 
it should look up data, rather than just being *given* data
- you won't be able to replace that resolv module with the super awesome 
puppetlabs_resolv module because of your custom way of handling data
- it makes a *very* bad assumption that everyone uses Hiera; it is not 
compatible with people who use ENCs that supply all class parameters, for 
example
- there's a higher barrier to entry to understanding the module: some 
people would have to read the body of the resolv module code to figure out 
what's going on (or there would be a long README)
- it's more complicated to test because the range of data it can take is 
more complicated

Now expand on my first example:


class puppetlabs_resolv($dns_servers) {
  file { '/etc/resolv.conf': content => template(...) }
}

class profile::dns_base {
  #lookup my DNS data from Hiera
  $hiera_dns_server_array = hiera_array('dns::server')
  #and add a global DNS server I have
  $common_dns_server = '127.0.0.1'
  class { 'puppetlabs_resolv':
    dns_servers => [ $hiera_dns_server_array, $common_dns_server ],
  }
}

class profile::dns_special {
  #don't do a hiera lookup, DNS here is special
  $special_dns = '10.1.1.1'
  class { 'puppetlabs_resolv':
    dns_servers => [ $special_dns ],
  }
}

node dc1 { include profile::dns_base }
node dc1_special { include profile::dns_special }


The puppetlabs_resolv module I downloaded from GitHub does one thing well, 
resolv.conf, in a simple and easily understood manner, and it comes with 
Rspec tests, so I don't have to reinvent the wheel.

All of my business logic about how I get IP addresses into that resolv 
module is in my profile::dns* classes. These are *my* profile classes, I 
can do whatever crazy Hiera lookups and string manipulation I want/need to 
get the data into a format that puppetlabs_resolv takes. In other words my 
profiles are the "glue" between my data and the "building block" 
puppetlabs_resolv module. At any time I can replace puppetlabs_resolv with 
lukebigum_resolv (which is obviously better) with a few tweaks to my 
profiles. If I replace my data backend or get rid of Hiera entirely, my 
profile might have to be adjusted but I don't have to stop using that 
awesome lukebigum_resolv I downloaded.

Why the use of a second profile, profile::dns_special? It takes complexity 
out of Hiera. I don't need a complicated hierarchy when I've got profiles, 
and I rarely need inheritance at all. I've got my "tpl_%{::domain}" level, 
which is where profile::dns_base looks up its data from, and anything that's 
special is actually a different implementation of how I usually do DNS, so it 
gets its own profile, hence profile::dns_special. It is better to handle 
these exceptions in Puppet code because it's an *actual* language, rather 
than trying to model something complex into Hiera, which is just a key-value 
store.
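A minimal hiera.yaml sketch of that flat hierarchy (Hiera 1 format; the 
datadir path is assumed):

```yaml
# Flat hierarchy: per-domain data plus a common fallback.
# Exceptions live in Puppet profiles, not in extra Hiera levels.
:backends:
  - yaml
:yaml:
  :datadir: /etc/puppet/hieradata
:hierarchy:
  - "tpl_%{::domain}"
  - common
```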

Your Hiera example where you have tpl_dc1.yaml and tpl_dc1-special.yaml is 
going to bite you. Your joke about mimicking node inheritance functionality 
in Hiera worries me a little, because it reminds me of some of my 
colleagues. Just because it can be modelled in Hiera, doesn't mean it 
should be. To give you an example, at my work place we can build an entire 
platform where each node's Hiera file looks li

Re: [Puppet Users] Node key merging/overloading - node inheritance vs hiera

2015-03-11 Thread Luke Bigum

On Wednesday, March 11, 2015 at 1:57:00 PM UTC, Christopher Wood wrote:
>
>
> >Puppet in fact provides three functions for lookups: there is
> >also hiera_hash().
> >
> >In any case, you are quite right.  Which sort of lookup is intended is an
> >attribute of the data -- part of the definition of each key -- but it is
> >not represented in or alongside the data.  Each user of the data somehow
> >has to know.  That could be tolerated, inconvenient as it is, except that
> >it is incompatible with automated data binding.  This is an issue that has
> >been recognized and acknowledged, though I'm uncertain whether it is
> >actively being addressed.
>
> Could you possibly expound on the "Each user of the data somehow has to 
> know" part? I'm having trouble with the notion that people would use puppet 
> manifests and hiera data without knowing what's in them. 
>
>
>

To answer Bostjan's original example, you have 3 "profiles" of syslog: one 
base, one dc1 and one dc1_special, and you assign those profiles to 
whatever node needs them.

-Luke 



Re: [Puppet Users] Puppet hangs when hiera data uses hiera lookup

2015-01-13 Thread Luke Bigum
We use recursive Hiera lookups here; they work fine for us on Puppet >= 3.7, 
but I haven't tested anything below that.

If you do "puppet master --compile  --debug" you will get the Hiera debug 
output as well, which might narrow down your problem.
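As an aside, data like the example further down can usually be flattened with 
plain YAML anchors instead of nested hiera() interpolation; a sketch using the 
poster's values:

```yaml
# Define each address once with an anchor, then alias it; no hiera()
# interpolation is needed, so there is nothing to recurse on
dns1: &dns1 '10.135.82.132'
dns2: &dns2 '10.35.249.52'
dns3: &dns3 '10.35.249.41'
resolvconf::nameserver:
  - *dns1
  - *dns2
  - *dns3
vagrant-dns::dns1: *dns1
vagrant-dns::dns2: *dns2
vagrant-dns::dns3: *dns3
```

This doesn't cover the concatenated vagrant-dns::search string, which anchors 
alone can't build.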

--
Luke Bigum
Senior Systems Engineer

Information Systems
Ph: +44 (0) 20 3192 2520

- Original Message -
From: "James Olin Oden" 
To: puppet-users@googlegroups.com
Sent: Tuesday, 13 January, 2015 3:50:34 PM
Subject: Re: [Puppet Users] Puppet hangs when hiera data uses hiera lookup

Cool, I will look into it.   Concerning support, I got the idea to use it
from the hiera documentation:

   https://docs.puppetlabs.com/hiera/1/variables.html#using-lookup-functions

And as I said with the hiera command line it worked:

$ hiera resolvconf::nameserver -c puppet.yaml ::site=corp
::role=dev ::hwtype=vagrant
["10.135.82.132", "10.35.249.52", "10.35.249.41"]

Such that I think it may be a bug.

Thanks for the anchors info though.   Best...James

On Tue, Jan 13, 2015 at 10:47 AM, Stephen Marlow  wrote:
> I can't say I've ever seen nested hiera lookups, and I don't know if they're
> supported.
>
> If these values are all in the same yaml file you might have better luck
> using anchors and references
> (https://en.wikipedia.org/wiki/YAML#Repeated_nodes). I think it would work
> for all of the above entries except for vagrant-dns::search, which I'm not
> sure how to tackle with just YAML.
>
> On Tue, Jan 13, 2015 at 10:09 AM, James Oden  wrote:
>>
>> I had two puppet modules that needed the same data, so I tried to define
>> the data once in hiera and then refer to the first definition via the
>> hiera() function.   When I did this, as soon as puppet began loading the
>> module that used one of these variables it hung.   I checked using the
>> hiera CLI and it could actually resolve the variables just fine.   When I
>> changed the hiera file to not use variables but instead copied the actual
>> strings wherever it needed them, puppet worked just fine.   Also, I did a
>> mixed solution where one of the modules had static data and the other did
>> the hiera lookup, and it got further but hung on the module that was doing
>> the lookups.
>>
>> Here is the hiera data I'm referring too:
>>
>> dns1: '10.135.82.132'
>> dns2: '10.35.249.52'
>> dns3: '10.35.249.41'
>> dns_search1: 'us.blah.com'
>> dns_search2: 'labs.nc.blather.com'
>> resolvconf::nameserver:
>>   - "%{hiera('dns1')}"
>>   - "%{hiera('dns2')}"
>>   - "%{hiera('dns3')}"
>> resolvconf::search:
>>   - "%{hiera('dns_search1')}"
>>   - "%{hiera('dns_search2')}"
>> vagrant-dns::device: 'enp0s3'
>> vagrant-dns::dns1:   "%{hiera('dns1')}"
>> vagrant-dns::dns2:   "%{hiera('dns2')}"
>> vagrant-dns::dns3:   "%{hiera('dns3')}"
>> vagrant-dns::search: "%{hiera('dns_search1')} %{hiera('dns_search2')}"
>>
>> I am using puppet 3.4.2.
>>
>> Is this  a known problem?   Am I doing something stupid?   I tried
>> googling for this but came up nil.
>>
>> Thanks...James
>>
>> --
>> You received this message because you are subscribed to the Google Groups
>> "Puppet Users" group.
>> To unsubscribe from this group and stop receiving emails from it, send an
>> email to puppet-users+unsubscr...@googlegroups.com.
>> To view this discussion on the web visit
>> https://groups.google.com/d/msgid/puppet-users/6c647668-c31d-4952-9349-4f7011b44def%40googlegroups.com.
>> For more options, visit https://groups.google.com/d/optout.
>
>
> --
> You received this message because you are subscribed to the Google Groups
> "Puppet Users" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to puppet-users+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/puppet-users/CALGSqjLkGNKahe9rT_7mNAMAnqk4cuSMLkZ65HQDeJnzDGQiBw%40mail.gmail.com.
>
> For more options, visit https://groups.google.com/d/optout.


[Puppet Users] Re: creating hashes from other hashes

2014-11-07 Thread Luke Bigum
Huh, at first glance that looks to me like a parser bug. Now that I think 
more on it, I seem to recall this coming up before. The $name of a Defined 
Type is not of type String, and Puppet Hash keys are always strings, 
according to the docs:

https://docs.puppetlabs.com/puppet/latest/reference/lang_datatypes.html#hashes

This code works, explicitly enclosing $name in a string:



define foo::bar (
   $stuff = {},
) {
   $new_hash = {"$name" => $stuff}
}

class foo {
   foo::bar { 'somevalue':
 stuff => {
   'one' => 'doesnt_matter',
   'two' => 'doesnt_matter',
 }
   }
}

include foo
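For completeness, the reason for building $new_hash this way is to feed it to 
create_resources; a rough sketch (the inner defined type 'foo::baz' is 
hypothetical):

```puppet
# $new_hash now has a String key ($name) mapping to the parameter hash,
# which is exactly the shape create_resources expects
define foo::bar (
  $stuff = {},
) {
  $new_hash = {"$name" => $stuff}
  create_resources('foo::baz', $new_hash)
}
```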


On Thursday, November 6, 2014 10:04:00 PM UTC, Tim.Mooney wrote:
>
>
> All- 
>
> We're using puppet (opensource) 3.4.2 master and clients.  We've been 
> using puppet a few years, including create_resources, but this is my 
> first foray into creating complicated nested hashes. 
>
> I've boiled the problem I'm running into down to this example: 
>
> $ cat /tmp/foo.pp 
> class foo { 
>foo::bar { 'somevalue': 
>  stuff => { 
>'one' => 'doesnt_matter', 
>'two' => 'doesnt_matter', 
>  } 
>} 
> } 
>
> define foo::bar ( 
>$stuff = {}, 
> ) { 
>
># 
># not valid: fails with a parser validation error on the key $name: 
># 
># Error: Could not parse for environment production: Syntax error at 
># 'name'; 
># expected '}' at /tmp/foo.pp:21 
># 
>$new_hash = { 
>  $name => $stuff, 
>} 
>
># 
># this works, using a constant key 
># 
>$new_hash = { 
>  'a_constant' => $stuff, 
>} 
> } 
>
>
> This comes from a larger, more complicated example, but what I'm trying to 
> do is 
>
> - take a hash ($stuff) that has all the parameters I need 
> - create a new hash with a single key that's the $name/$title for the 
> define, 
>and a value that contains the hash $stuff that I was passed. 
>
> As you might guess, this is to make $new_hash suitable for passing 
> to create_resources. 
>
> Is there some other way to create a new hash, give it a single top-level 
> key that is a variable, and assign a separate (passed-in as a parameter) 
> hash as the value for that key?  I would be fine with using stdlib::merge, 
> but I don't see any obvious way to accomplish this task with stdlib::merge 
> either. 
>
> Thanks, 
>
> Tim 
> -- 
> Tim Mooney tim.m...@ndsu.edu 
>  
> Enterprise Computing & Infrastructure  701-231-1076 
> (Voice) 
> Room 242-J6, Quentin Burdick Building  701-231-8541 (Fax) 
> North Dakota State University, Fargo, ND 58105-5164 
>



[Puppet Users] Re: puppetdb missing environment fact

2013-12-04 Thread Luke Bigum
'environment' is not a Fact:

laptop:~$ sudo facter -p environment
laptop:~$ 

It is a configuration parameter of Puppet. I'm not sure why older 2.7 hosts 
would be reporting it as a Fact to PuppetDB, unless in 2.7 all top scope 
variables were sent this way.

You could use a custom Fact to pull out the environment an Agent *would* run 
with, using Fact code like this:

#Get the configured environment out of puppet.conf
begin
  puppet_environment = ''

  File.open('/etc/puppet/puppet.conf').each do |line|
    if line =~ /^\s*environment\s*=\s*(\S+)/
      puppet_environment = $1
    end
  end

  Facter.add(:puppet_environment) do
    setcode do
      puppet_environment
    end
  end
rescue
  Facter.warn("puppet_environment.rb failed: #{$!}")
end
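As a quick sanity check, the same regex can be exercised against sample 
puppet.conf content in plain Ruby (the file contents here are hypothetical):

```ruby
# Run the fact's regex over sample puppet.conf lines; commented-out
# settings don't match because '#' precedes the keyword
sample = "[agent]\n    environment = production\n# environment = ignored\n"

puppet_environment = ''
sample.each_line do |line|
  if line =~ /^\s*environment\s*=\s*(\S+)/
    puppet_environment = $1
  end
end

puts puppet_environment  # => production
```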


On Wednesday, December 4, 2013 10:04:53 AM UTC, james.e...@fasthosts.com 
wrote:
>
> Hi,
>
> I'm seeing something rather strange with puppetdb (1.5.2) in regards to 
> the environment fact.
>
> On my puppetdb host:
>
> If I run the following query:
>
> curl -G 'http://localhost:8080/v3/facts' --data-urlencode 'query=["=", 
> "name", "environment"]'
>
> I would expect to receive the environment fact for every node that I'm 
> managing with puppet (>500).
>
> However, that query only returns 11 nodes.  These 11 nodes are running 
> puppet 2.7.22.
>
> I am in the process of upgrading puppet to 3.3.2 from 2.7.22.
>
> All of the nodes running 3.3.2 are missing the environment fact from 
> puppetdb.  All the 2.7.22 nodes have the environment fact stored.
>
> Can anyone think of a reason why the environment fact is missing for my 
> 3.3.2 nodes?
>
> J
>



[Puppet Users] Re: MySQL server install with datadir != /var/lib/mysql

2013-12-04 Thread Luke Bigum
It should be theoretically possible. The mysql-server package owns 
/var/lib/mysql, but it is the mysql_install_db script that sets up an empty 
database in $datadir when the service starts if $datadir is empty. If you 
update your config file before you start the mysql server, you should be 
able to point it at any datadir you like. It will leave an empty directory 
at /var/lib/mysql, but hopefully you are ok with that as it's owned by an 
RPM.

I had a quick look at the module, /var/lib/mysql is hard coded in a lot of 
places and you'd have to override / set most of them as well as in my.cnf:

$ grep -R '/var/lib/mysql' *
manifests/params.pp:  $datadir   = '/var/lib/mysql'
manifests/params.pp:  $socket    = '/var/lib/mysql/mysql.sock'
manifests/params.pp:  $datadir   = '/var/lib/mysql'
manifests/params.pp:    /(SLES|SLED)/ => '/var/lib/mysql/mysql.sock',
manifests/params.pp:    /(SLES|SLED)/ => '/var/lib/mysql/mysqld.pid',
manifests/params.pp:  $datadir  = '/var/lib/mysql'
manifests/params.pp:  $datadir   = '/var/lib/mysql'
manifests/params.pp:  $socket    = '/var/lib/mysql/mysql.sock'
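Walter's manual sequence below can be approximated in Puppet with explicit 
ordering, so my.cnf lands before mysqld ever starts; a rough sketch (class and 
template names assumed, not the puppetlabs-mysql module):

```puppet
# Sketch: install the package, drop the custom my.cnf, and only then
# allow mysqld to start, so mysql_install_db populates the new datadir
class profile::mysql_custom_datadir {
  package { 'mysql-server':
    ensure => installed,
  }

  file { '/etc/my.cnf':
    ensure  => file,
    content => template('profile/my.cnf.erb'),  # template sets the datadir
    require => Package['mysql-server'],
  }

  service { 'mysqld':
    ensure  => running,
    enable  => true,
    require => File['/etc/my.cnf'],
  }
}
```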


On Wednesday, December 4, 2013 12:30:31 PM UTC, Walter Heck wrote:
>
> Tried and failed. The problem is that the mysql package automatically uses 
> /var/lib/mysql, so the right sequence is:
>
> 1) yum install mysql-server
> 2) service mysqld stop
> 3) adjust my.cnf
> 4) make moves on filesystem if needed
> 5) service mysqld start
> (steps 2 and 3 can be reversed)
>
> This is hard to puppetise as it is only needed the very first time when 
> mysql is not yet installed. I've only ever had to do this on smaller groups 
> of servers at a time, so always resorted to doing it manually.
>
> let me know if you figure it out, would be great to see a solution in 
> puppetlabs-mysql.
>
>
> On Wednesday, 4 December 2013 02:39:50 UTC+1, Thomas wrote:
>>
>> Has anybody sucessfully used puppetlabs-mysql (or some other method) to 
>> install MySQL-server on Linux with a my.cnf where datadir != /var/lib/mysql 
>> ?
>>
>

-- 
You received this message because you are subscribed to the Google Groups 
"Puppet Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to puppet-users+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/puppet-users/fa6d9246-47c6-407a-a4e5-758cf1621759%40googlegroups.com.
For more options, visit https://groups.google.com/groups/opt_out.


[Puppet Users] Re: Hiera vs OpenLDAP

2013-10-30 Thread Luke Bigum
This one perhaps?

https://github.com/hunner/hiera-ldap

The example is for Users, but it doesn't look difficult to adapt the search to 
get a list of servers. How you model the classes and class parameters in LDAP 
might be trickier. Maybe your LDAP structure would look something like this 
(which doesn't require much schema):

cn=nodename,ou=nodes,dc=example,dc=com
cn=classname,cn=nodename,ou=nodes,dc=example,dc=com
cn=classparameter1,cn=classname,cn=nodename,ou=nodes,dc=example,dc=com
value=woof

On Wednesday, October 30, 2013 4:53:29 AM UTC, Steven Jonthen wrote:
>
> Hi guys,
>
> I want to use Hiera with a OpenLDAP-Backend. The OpenLDAP-Backend should 
> contain class parameters. When a agent connects to the puppet master then 
> hiera should extract from the OpenLDAP-Backend which roles and which 
> class-parameters the node has. I haven't found any useful example on the 
> internet of how to integrate OpenLDAP into Puppet and how to create and use 
> the data. 
>
> Can anyone help me?
>

-- 
You received this message because you are subscribed to the Google Groups 
"Puppet Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to puppet-users+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/puppet-users/1497c323-e6bb-46f8-ae26-5b785e89b6e4%40googlegroups.com.
For more options, visit https://groups.google.com/groups/opt_out.


[Puppet Users] Re: hiera-gpg, CentOS6 and puppet 3.2.4

2013-09-04 Thread Luke Bigum
On Tuesday, September 3, 2013 10:57:39 PM UTC+1, Worker Bee wrote:

> Has anyone been able to get this working?
>

I use those very same versions and it works, so don't despair. It took me 
three separate attempts over the course of a few months to get it working - 
my stumbling block was GPG keys though ;-)

>
> # Class: testdecry
> #
>
> # [Remember: No empty lines between comments and class definition]
> class testdecry {
> $env = 'live'
> $pass = hiera("rootpwd")
> notify{"The value is: ${pass}":}
> }
>
> 
> Running via puppet fails
> [root@me]# puppet agent --test
> Info: Retrieving plugin
> Error: Could not retrieve catalog from remote server: Error 400 on SERVER: 
> can't convert nil into String at 
> /etc/puppet/modules/testdecry/manifests/init.pp:17 on node me.net
> Warning: Not using cache on failed catalog
> Error: Could not retrieve catalog; skipping run
>

That doesn't look like a Hiera-specific error - unless somehow you are 
trying to query the key 'nil', it looks more like a generic Puppet parser 
error. Can you paste your full testdecry manifest? Your example above 
doesn't have 17 lines, so it's hard to tell what the problem might be.

This might also help: 

puppet master --compile me.net --debug

Save the output of this command and find your error (there'll be a lot of 
debug information as it will debug every Hiera call).

-- 
You received this message because you are subscribed to the Google Groups 
"Puppet Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to puppet-users+unsubscr...@googlegroups.com.
To post to this group, send email to puppet-users@googlegroups.com.
Visit this group at http://groups.google.com/group/puppet-users.
For more options, visit https://groups.google.com/groups/opt_out.


Re: [Puppet Users] Hiera and hiera-gpg

2013-09-03 Thread Luke Bigum

>
>
> ... That's not explained very well but I can't think of a better way to 
> phrase it yet. Does that help so far?
>

Perhaps I can show you what I mean. Run these commands and look at the 
debug output to see which files Hiera is trying to open. Note how it 
interprets each variable you add on the command line as new sub-directories 
of your hieradata directory, based on how you use the %{env}, %{location} 
and %{calling_module} variables in hiera.yaml.

hiera -c /etc/puppet/hiera.yaml rootpwd calling_module=motd --debug
hiera -c /etc/puppet/hiera.yaml rootpwd calling_module=motd env=live --debug
hiera -c /etc/puppet/hiera.yaml rootpwd calling_module=motd env=live 
location=woofwoof --debug

Once you understand that, you've got to get those variables into your 
Puppet manifest before the hiera() function call. This is a very very very 
bad example, but it shows how you need to have those variables present in 
the manifest for Hiera to use them in a lookup:

class motd {
  $env = 'live'
  # $calling_module should be an automatic variable given to you by
  # Puppet's hiera() function call
  $location = ''
  $rootpwd = hiera('rootpwd')
}

-- 
You received this message because you are subscribed to the Google Groups 
"Puppet Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to puppet-users+unsubscr...@googlegroups.com.
To post to this group, send email to puppet-users@googlegroups.com.
Visit this group at http://groups.google.com/group/puppet-users.
For more options, visit https://groups.google.com/groups/opt_out.


Re: [Puppet Users] Hiera and hiera-gpg

2013-09-03 Thread Luke Bigum
I just started a big reply to your last email and it looks like you've 
figured most of it out. At least you're not still thinking "manifests" - 
your problem is in hiera.yaml ;-)

On Tuesday, September 3, 2013 5:04:19 PM UTC+1, Worker Bee wrote:
>
> I am pretty sure I still have something wrong with my set up but, I just 
> cannot seem to see what it is...
>
> Notice if I attempt to decrypt vi the command line and do not indicate 
> "env=live",  it fails..
> [root@me puppet]# hiera -c /etc/puppet/hiera.yaml rootpwd 
> calling_module=motd
> nil
> [root@me puppet]# hiera -c /etc/puppet/hiera.yaml rootpwd 
> calling_module=motd env=live
> rootpass
>
>
The reason that works is written in your hiera.yaml config below. You've 
told Hiera that your Hierarchy contains the variable %{env}. Now while that 
works fine on the command line, when the Hiera function is called during 
catalog compilation in a manifest I'm betting that the 'env' variable does 
not exist, which is why your key is not found. What is %{env}? Did you copy 
it straight from Craig's blog or do you actually use it in your Hierarchy?

From the way you've got your Hierarchy specified now, if I ran a find 
across your hieradata directory, this is what I'd expect to find:

/etc/puppet/hieradata/some_env/some_location/some_calling_module.yaml
/etc/puppet/hieradata/some_env/some_location/some_calling_module.gpg
/etc/puppet/hieradata/some_env/some_calling_module.yaml
/etc/puppet/hieradata/some_env/some_calling_module.gpg
/etc/puppet/hieradata/common.yaml
/etc/puppet/hieradata/common.gpg

The hierarchy you've got must match the path of the Hiera data files in 
that directory.

When run from the command line, the %{env}, %{location} and 
%{calling_module} variables are passed on the command line. When the hiera 
function call is made during a Puppet catalog compilation then those 
variables must be defined for that node ($env, $location, but 
$calling_module is implicit), either as Facter Facts or as normal variables 
in a Puppet manifest.
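
To make that concrete, here is a hypothetical node definition (the node and 
values are made up, not from your setup) showing the variables that a 
hierarchy using %{env} and %{location} needs defined at catalog compilation 
time:

```puppet
# Hypothetical example: every variable interpolated in hiera.yaml's
# hierarchy must exist when the hiera() call is compiled, either as a
# Facter fact or as a Puppet variable in scope.
node 'me.example.com' {
  $env      = 'live'    # satisfies %{env}
  $location = 'london'  # satisfies %{location}

  # %{calling_module} is supplied automatically by Puppet's hiera()
  include motd
}
```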

... That's not explained very well but I can't think of a better way to 
phrase it yet. Does that help so far?
 

>
> 
> [root@me puppet]# more hiera.yaml
> ---
> :backends:
>   - yaml
>   - gpg
>
> :logger: console
>
> :hierarchy:
>   - %{env}/%{location}/%{calling_module}
>   - %{env}/%{calling_module}
>   - common
>  
>
> :yaml:
>:datadir: /etc/puppet/hieradata
>
> :gpg:
>:datadir: /etc/puppet/hieradata
>
> _
> my encrypted files are in /etc/puppet/hieradata/live
>
>
>
> Thanks in advance for any help!
> Bee 
>
>
> On Tue, Sep 3, 2013 at 11:38 AM, Worker Bee 
> > wrote:
>
>> Hi Guys;
>>  
>> I really appreciate your help and apologize for the continued 
>> questions... however, apparently, I am missing something here.  I cannot 
>> get this working.
>>  
>> I have set hiera-gpg up as per the docs I can find but, I still cannot 
>> seem to get my manifests correct.  If someone would kindly provide a sample 
>> manifest, I would be grateful!
>>  
>> Also, per Craig Dunn's blog, he is placing hieradata files in 
>> /etc/puppet/hieradata/live.  Is the "live" subdir required?  Is there some 
>> sort of environment limitation that requires the files live in this subdir?
>>  
>> Thank you very much!
>> Bee
>>
>> On Fri, Aug 30, 2013 at 1:31 PM, Rich Burroughs 
>> 
>> > wrote:
>>
>>>  Your manifests look the same. You do a hiera lookup just as you would 
>>> if you weren't using the GPG integration. It's just another data store for 
>>> hiera.
>>>
>>> You do need to set that up, as other people have mentioned. But it's no 
>>> different in the manifests.
>>>  
>>>
>>> On Fri, Aug 30, 2013 at 6:30 AM, Worker Bee 
>>> > wrote:
>>>
 I am looking for some manifest examples, if anyone has any to share! 


 On Fri, Aug 30, 2013 at 7:16 AM, Richard Clark 
 
 > wrote:

>  On Thu, Aug 29, 2013 at 05:47:41PM -0400, Worker Bee wrote:
> > I am having a bit of difficulty implementing hiera-gpg; particularly 
> with
> > accomplishing the deencryption in my manifests.  Can anyone either 
> provide
> > a simple example or point me to a good resource?  I have searched 
> alot and
> > am still struggling.
> >
> > Any help would be very appreciated!
> >
> > Thanks!
> > Bee
>
> You just need to have the hiera-gpg gem installed, make sure that gpg 
> is
> listed in the backends array in hiera.yaml, then the puppet user needs
> to have the private key configured within its $HOME/.gnupg - where 
> $HOME
> is usually /var/lib/puppet.
>
> By default pgp keys are encrypted with a passphrase, which would need 
> to
> be supplied and held in a running keyring for that user, so was
> previously working around this by using a non-passphrase protected
> subkey.
>
>

Re: [Puppet Users] does PuppetDB expire resource parameters?

2013-08-08 Thread Luke Bigum


On Thursday, August 8, 2013 2:14:33 PM UTC+1, Ken Barber wrote:
>
> > I think that's just me being too censorship-heavy and abusing copy and 
> > paste, I would have copied some fields from the same example. Trust me 
> that 
> > the resources dictionary was empty though ;-) 
>
> So just to clarify, the resources hash 
> '8ba4379c364b9dba9d18836ef52ce5f4f82d0468' was different or the same 
> between the two examples? 
>

Actually they are the same, my copy and paste skills remain rock solid for 
another day.

I found some more broken resources belonging to some dev servers with a 
handy jgrep:

curl -H 'Accept: application/json' -X GET 
'https://puppet:8081/v2/resources' --cacert 
/var/lib/puppet/ssl/ca/ca_crt.pem --cert 
/var/lib/puppet/ssl/certs/puppet.pem  --key 
/var/lib/puppet/ssl/private_keys/puppet.pem --data-urlencode 'query=["=", 
"type", "Nagios::Config::Host"]' | jgrep "parameters.host_alias=null"

This is the hostname redacted JSON before:

***
[
  {
"type": "Nagios::Config::Host",
"tags": [
  "nagios::host",
  "default",
  "node",
  "config",
  "nagios::config::host",
  "hostname",
  "en1",
  "host",
  "nagios::host::host",
  "undef",
  "class",
  "nagios"
],
"parameters": {
},
"certname": "hostname",
"title": "hostname",
"resource": "3368824b20c1eb7052952f574bb5547ca0c95a50",
"sourcefile": 
"/etc/puppet/environments/production/modules/nagios/manifests/host/host.pp",
"sourceline": 27,
"exported": true
  }
]
***

And after a Puppet run to refresh the catalog:


***
[
  {
"type": "Nagios::Config::Host",
"sourceline": 27,
"certname": "hostname",
"resource": "3368824b20c1eb7052952f574bb5547ca0c95a50",
"exported": true,
"title": "hostname",
"tags": [
  "nagios::host",
  "node",
  "config",
  "nagios::config::host",
  "hostname",
  "en1",
  "host",
  "nagios::host::host",
  "undef",
  "base",
  "class",
  "nagios"
],
"parameters": {
  "host_alias": "hostname",
  "tag": "en1",
  "address": "hostname"
},
"sourcefile": 
"/etc/puppet/environments/production/modules/nagios/manifests/host/host.pp"
  }
]
***

So there are 12 resources with this problem remaining now.

-Luke

> > Now if I was thinking smart I would have taken a Postgres backup before I 
> > re-freshed all the catalogs, but I didn't, not sure if that would have 
> > helped much. I agree with subsequent posts as well - probably not a 
> > migration problem. 
>
> It might have helped. Are any other nodes and resources still 
> exhibiting this strange behaviour? Maybe checking for any exported 
> resources with no params might be worthwhile. 
>
> ken. 
>

-- 
You received this message because you are subscribed to the Google Groups 
"Puppet Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to puppet-users+unsubscr...@googlegroups.com.
To post to this group, send email to puppet-users@googlegroups.com.
Visit this group at http://groups.google.com/group/puppet-users.
For more options, visit https://groups.google.com/groups/opt_out.




Re: [Puppet Users] does PuppetDB expire resource parameters?

2013-08-08 Thread Luke Bigum

On Thursday, August 8, 2013 12:48:03 PM UTC+1, Ken Barber wrote:
>
>
> No good idea yet, but there is something suspicious in your curl 
> responses - the "resource" hash, did you obfuscate this yourself on 
> purpose? The two hashes between the first and second requests are 
> identical. That hash is calculated based on the sum of the resource, 
> including parameters - so it seems impossible that PuppetDB arrived at 
> the same hash with and without parameters. 
>

I think that's just me being too censorship-heavy and abusing copy and 
paste - I would have copied some fields from the same example. Trust me that 
the resources dictionary was empty though ;-)
 

> Maybe just maybe the responses were identical, and somehow PuppetDB 
> was not returning parameters as a fault. This might indicate some sort 
> of integrity problem, whereby the link to the parameters in the RDBMS 
> was lost somehow, although this is the first time I've heard of it 
> Luke. Maybe this was an upgrade schema change failure between 1.0.1 
> and 1.3.2?


We thought of that, however we upgraded 3 weeks ago and I saw working 
reports in my Dashboard on the Nagios server up until Friday last week, so 
it was able to collect valid resources for a long while after the DB 
migration. What happened on Friday though was I basically had this running:

mco rpc shellout cmd cmd="puppet agent --test --noop" -T mcollective

Then I realised we had broken manifests everywhere, so I fixed it and 
started MC again. From that point onward the Nagios server was unable to 
collect as it was getting these dodgy resources from PuppetDB. What doesn't 
make sense is that my subsequent site-wide no-op run should have replaced 
every catalog in PuppetDB, so I'm stumped.

I'd have to consult what changed in the schema between 
> those two points to determine if this is likely however. Does the 
> timing of your upgrade, and the first time you saw this fault line up 
> with such a possibility? Remember a schema change will only occur 
> after a restart of PuppetDB ... (so perhaps consult your logs to see 
> when this happened after upgrade). 
>

Just checked - PuppetDB restarted at the same time the RPMs were upgraded 
and hasn't restarted since.

Let me at least try to replicate while I await your responses. 
>

Now if I was thinking smart I would have taken a Postgres backup before I 
re-freshed all the catalogs, but I didn't, not sure if that would have 
helped much. I agree with subsequent posts as well - probably not a 
migration problem.

-- 
You received this message because you are subscribed to the Google Groups 
"Puppet Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to puppet-users+unsubscr...@googlegroups.com.
To post to this group, send email to puppet-users@googlegroups.com.
Visit this group at http://groups.google.com/group/puppet-users.
For more options, visit https://groups.google.com/groups/opt_out.




[Puppet Users] does PuppetDB expire resource parameters?

2013-08-08 Thread Luke Bigum
Hi all,

We've come across a rather strange problem where the parameters of some 
resources in PuppetDB are now empty.

We have a Nagios server collecting resources from PuppetDB and we've 
started to get failures like this for one resource type:

Error: Could not retrieve catalog from remote server: Error 400 on SERVER: 
Must pass host_alias to Nagios::Config::Host[hostname] on node nagiosserver

The Puppet manifest that defines that resource is below; it should be 
impossible to not populate host_alias:

***
define nagios::host::host($host = $::fqdn, $tag = undef) {
  @@nagios::config::host { $host:
    host_alias => $host,
    address    => $host,
    tag        => $use_tag,
  }
}
***
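
For completeness, a sketch of the collecting side (our actual Nagios module's 
collector isn't shown in this thread, so this is an assumption about its 
shape). The Nagios server realises the exported resources, and that is where 
a resource with an empty parameter hash breaks the compile:

```puppet
# Hypothetical collector on the Nagios server: realises exported
# Nagios::Config::Host resources out of PuppetDB. If PuppetDB hands back
# one with empty parameters, host_alias is missing and compilation fails
# with "Must pass host_alias to Nagios::Config::Host[...]".
Nagios::Config::Host <<| |>>
```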

If we query PuppetDB directly (redacted), there are indeed no parameters at 
all on this resource:

***
# curl -H 'Accept: application/json' -X GET 
'https://puppet:8081/v2/resources' --cacert 
/var/lib/puppet/ssl/ca/ca_crt.pem --cert 
/var/lib/puppet/ssl/certs/puppet.pem  --key 
/var/lib/puppet/ssl/private_keys/puppet.pem --data-urlencode 'query=["=", 
"type", "Nagios::Config::Host"]' | jgrep "certname=hostname"
[
  {
"resource": "8ba4379c364b9dba9d18836ef52ce5f4f82d0468",
"parameters": {
},
"title": "hostname",
"exported": true,
"certname": "hostname",
"type": "Nagios::Config::Host",
"sourceline": 27,
"sourcefile": 
"/etc/puppet/environments/production/modules/nagios/manifests/host/host.pp",
"tags": [
]
  }
]
***

After a Puppet run and a new catalog, this resource now looks normal:

***
# curl -H 'Accept: application/json' -X GET 
'https://puppet:8081/v2/resources' --cacert 
/var/lib/puppet/ssl/ca/ca_crt.pem --cert 
/var/lib/puppet/ssl/certs/puppet.pem  --key 
/var/lib/puppet/ssl/private_keys/puppet.pem --data-urlencode 'query=["=", 
"type", "Nagios::Config::Host"]' | jgrep "certname=hostname"
[
  {
"type": "Nagios::Config::Host",
"sourceline": 27,
"title": "hostname",
"certname": "hostname",
"resource": "8ba4379c364b9dba9d18836ef52ce5f4f82d0468",
"parameters": {
  "address": "hostname",
  "tag": "tag",
  "host_alias": "hostname"
},
"exported": true,
"sourcefile": 
"/etc/puppet/environments/production/modules/nagios/manifests/host/host.pp",
"tags": [
]
  }
]
***

These nodes do not have Puppet run on them regularly. We did upgrade from 
PuppetDB 1.0.1-1.el6.noarch to 1.3.2-1.el6.noarch about 3 weeks ago. We 
don't do any automatic report or node expiry.

This started happening back on 2nd August; halfway through that day, the 
Puppet runs on the Nagios server started failing with this error. Thinking 
back, at the time I had a broken Nagios module and a lot of manifests were 
failing to compile, but I fixed this and re-ran the failures, and everything 
was ok. PuppetDB only stores the last catalog, so there's no way a broken 
catalog could have stayed there, right?

I've fixed this by refreshing the catalog of all nodes in PuppetDB, but 
I've got no idea how it got into this state. Any ideas?

Thanks,

-Luke

-- 
You received this message because you are subscribed to the Google Groups 
"Puppet Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to puppet-users+unsubscr...@googlegroups.com.
To post to this group, send email to puppet-users@googlegroups.com.
Visit this group at http://groups.google.com/group/puppet-users.
For more options, visit https://groups.google.com/groups/opt_out.




[Puppet Users] Re: hiera can't see a value on a puppet client, but the hiera app on the server can

2013-05-09 Thread Luke Bigum
Hi Campee,

On Thursday, May 9, 2013 3:34:20 AM UTC+1, Campee wrote:

>
> I run puppet and get this error:
>
> err: Could not retrieve catalog from remote server: Error 400 on SERVER: 
> Could not find data item ak_auth_primary in any Hiera data file and no 
> default supplied at /etc/puppet/manifests/site.pp:11 on node 
> tag5-4-qa-sjc.domain.net
>
> on my puppet master server:
>
> $ hiera ak_auth_primary region=northamerica datacenter=sjc environment=qa
>
> Answer: ops1-1-qa-sjc
>
> $ hiera ak_auth_primary region=northamerica datacenter=sjc environment=qa 
> machinetype=tag hostname=tag5-4-qa-sjc
>
>
Can you test Hiera like this on your Puppet Master? It uses the cached 
Facts of your node, rather than you filling in all the gaps by hand, and 
thus is a more thorough test:

hiera -c /etc/puppet/hiera.yaml -y /var/lib/puppet/yaml/facts/tag5-4-qa-sjc.domain.net.yaml ak_auth_primary --debug

You should get some helpful debug trace through what Hiera is doing and 
what data files it is trying to open, in order:

DEBUG: Thu May 24 13:18:53 + 2012: Hiera JSON backend starting
DEBUG: Thu May 24 13:18:53 + 2012: Looking up key 'ak_auth_primary' in 
JSON backend
DEBUG: Thu May 24 13:18:53 + 2012: Backend datadir for json is an 
Array, multiple data dirs to search
DEBUG: Thu May 24 13:18:53 + 2012: Looking in data dir 
/etc/puppet/private/
DEBUG: Thu May 24 13:18:53 + 2012: Looking at hierarchy source 
tag5-4-qa-sjc.domain.net
DEBUG: Thu May 24 13:18:53 + 2012: Cannot find datafile 
/etc/puppet/private/tag5-4-qa-sjc.domain.net.json, skipping
DEBUG: Thu May 24 13:18:53 + 2012: Looking at hierarchy source common
DEBUG: Thu May 24 13:18:53 + 2012: Cannot find datafile 
/etc/puppet/private/common.json, skipping
DEBUG: Thu May 24 13:18:53 + 2012: Looking at hierarchy source 
tag5-4-qa-sjc.domain.net

-Luke

-- 
You received this message because you are subscribed to the Google Groups 
"Puppet Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to puppet-users+unsubscr...@googlegroups.com.
To post to this group, send email to puppet-users@googlegroups.com.
Visit this group at http://groups.google.com/group/puppet-users?hl=en.
For more options, visit https://groups.google.com/groups/opt_out.




Re: [Puppet Users] Re: Practices: what _not_ to manage with Puppet?

2013-05-04 Thread Luke Bigum
On Saturday, May 4, 2013 12:43:57 PM UTC+1, Martin Langhoff wrote:

> On Fri, May 3, 2013 at 4:43 PM, Schofield > 
> wrote: 
> > Everything else is managed by puppet. 
>
> Do you manage complex network setups (bonding, routing) via puppet? 
> There is a certain degree of chicken-and-egg in that; how do you 
> handle managing configuration without breaking the network that 
> delivers the puppet config to the host? 
>

We have a very generic kickstart that runs Puppet as a final step, and in 
that first Puppet run I have a module that writes out 
/etc/sysconfig/network-scripts/ files, which include routes, rules, 
bonding, vlans, bridges, etc. All the information is stored in Hiera. We do 
not use Puppet to restart networking or attempt to fix up any 
discrepancies, someone has to come along and "service network restart". So 
we use Puppet to provision what the networking should look like, but not 
enforce it. This means an Admin can come along and mess around with the 
networking and thus things can deviate from what Puppet says they should be.

However, since all the information is stored in Hiera I can have Puppet 
export out nagios checks that do things along the lines of "this interface 
is not up but it should be" and "this interface does not belong to the bond 
it should".

Do you manage complex disk setups (RAID arrays, DRBD) via Puppet? Any 
> hints as to how? 
>

I haven't tried to manage DRBD but the config should be simple. You're 
going to run into problems if you try to create a DRBD disk across two 
servers at the same time - Puppet can't orchestrate the commands that need 
to be run on each server, for that you would need MCollective and unless 
you were creating 100s of DRBD disks, I wouldn't bother and I'd do it by 
hand.

I do manage iSCSI disks, LVM and file systems in Puppet though. There's a 
manual step where we have to go to our storage appliances and create the 
iSCSI disk first, then put the iSCSI target ID into Hiera, but the rest is 
clockwork. It provisions only, it doesn't attempt to resize or reformat 
file systems if it finds a discrepancy. To counteract that, like the 
networking scripts, I can export nagios checks that say "this file system 
is 30 Gig and ext3, but it's supposed to be 10 Gig and ext4" which tells me 
someone's gone and made on-box changes that aren't back-ported to Puppet / 
Hiera.

Or perhaps you only use Puppet so extensively in VMs, where you don't 
> have to deal with all these pesky issues?
>

I have Puppet create our VMs, which calls our kickstart, which calls Puppet 
;-)

For some tasks we _don't_ use VMs (high perf HA DB servers, asterisk 
> servers are two top examples). I find that managing the config of 
> those boxes is enormously important to retain sanity... 
>

Of course, we use lots of almost-identical VMs for things that are a 
> good fit for VMs (webservers, etc)... 
>
>
>
> m 
> -- 
>  martin@gmail.com  
>  -  ask interesting questions 
>  - don't get distracted with shiny stuff  - working code first 
>  ~ http://docs.moodle.org/en/User:Martin_Langhoff 
>

-- 
You received this message because you are subscribed to the Google Groups 
"Puppet Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to puppet-users+unsubscr...@googlegroups.com.
To post to this group, send email to puppet-users@googlegroups.com.
Visit this group at http://groups.google.com/group/puppet-users?hl=en.
For more options, visit https://groups.google.com/groups/opt_out.




[Puppet Users] Re: composite tags broken

2013-03-25 Thread Luke Bigum
On Monday, March 25, 2013 5:25:20 PM UTC, Ellison Marks wrote:

> As far as I can tell, if a provided tag matches any tag in the resource, 
> it will be applied. Since you provide woof and service, file { '/tmp/foo': 
> is run because it is tagged woof, service { 'foo': is run because it is 
> tagged woof and is a service, and file { '/tmp/bar': is run because it is a 
> service.
>

Nope, File[/tmp/bar] is not a Service, it's a file :-) It shouldn't match 
the tags specified whether they're treated as a logical AND or a logical OR. 

My Puppet output is bogus though, I pasted the wrong command for that 
manifest (symptom of trying out a bunch of different combinations), this is 
the correct output:

$ puppet apply test.pp --noop --tags "woof,service"
Notice: /Stage[main]//File[/tmp/foo]/ensure: current_value absent, should 
be present (noop)
Notice: /Stage[main]//Service[foo]/ensure: current_value stopped, should be 
running (noop)

However, you have got me thinking that the tags behave like a logical OR 
rather than in the composite nature I originally assumed.

-Luke

On Monday, March 25, 2013 5:35:30 AM UTC-7, Luke Bigum wrote:
>>
>> Hi all,
>>
>> I wanted to check I'm not doing anything wrong before I lodge a bug. I 
>> think composite tags should work according to this doc:
>>
>>
>> http://docs.puppetlabs.com/puppet/3/reference/lang_tags.html#restricting-catalog-runs
>>
>> However I do not get the expected behaviour with my test using Puppet 3:
>>
>> $ puppet apply test.pp --noop --tags "woof,service"
>> Notice: /Stage[main]//File[/tmp/foo]/ensure: current_value absent, should 
>> be present (noop)
>> Notice: /Stage[main]//File[/tmp/bar]/ensure: current_value absent, should 
>> be present (noop)
>> Notice: /Stage[main]//Service[foo]/ensure: current_value stopped, should 
>> be running (noop)
>>
>> For this catalog:
>>
>> file { '/tmp/foo':
>>   ensure => present,
>>   tag    => [ 'woof', 'cow' ],
>> }
>> service { 'foo':
>>   ensure => running,
>>   tag    => [ 'woof', 'cow' ],
>> }
>> file { '/tmp/bar':
>>   ensure => present,
>>   notify => Service['foo'],
>> }
>>
>> It should only work on the service, not everything.
>>
>> Bug, yes?
>>
>> -Luke
>>
>

-- 
You received this message because you are subscribed to the Google Groups 
"Puppet Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to puppet-users+unsubscr...@googlegroups.com.
To post to this group, send email to puppet-users@googlegroups.com.
Visit this group at http://groups.google.com/group/puppet-users?hl=en.
For more options, visit https://groups.google.com/groups/opt_out.




[Puppet Users] composite tags broken

2013-03-25 Thread Luke Bigum
Hi all,

I wanted to check I'm not doing anything wrong before I lodge a bug. I 
think composite tags should work according to this doc:

http://docs.puppetlabs.com/puppet/3/reference/lang_tags.html#restricting-catalog-runs

However I do not get the expected behaviour with my test using Puppet 3:

$ puppet apply test.pp --noop --tags "woof,service"
Notice: /Stage[main]//File[/tmp/foo]/ensure: current_value absent, should 
be present (noop)
Notice: /Stage[main]//File[/tmp/bar]/ensure: current_value absent, should 
be present (noop)
Notice: /Stage[main]//Service[foo]/ensure: current_value stopped, should be 
running (noop)

For this catalog:

file { '/tmp/foo':
  ensure => present,
  tag    => [ 'woof', 'cow' ],
}
service { 'foo':
  ensure => running,
  tag    => [ 'woof', 'cow' ],
}
file { '/tmp/bar':
  ensure => present,
  notify => Service['foo'],
}

It should only work on the service, not everything.

Bug, yes?

-Luke

-- 
You received this message because you are subscribed to the Google Groups 
"Puppet Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to puppet-users+unsubscr...@googlegroups.com.
To post to this group, send email to puppet-users@googlegroups.com.
Visit this group at http://groups.google.com/group/puppet-users?hl=en.
For more options, visit https://groups.google.com/groups/opt_out.




Re: [Puppet Users] Certificate nightmares

2013-02-11 Thread Luke Bigum
On Friday, February 8, 2013 11:58:22 PM UTC, Nick Fagerlund wrote:

> If a brand new never-seen-before agent starts up, it goes like this:
>
> * Do I have a private key? Nope? Better generate one.
> * Okay, do I have a certificate? Nope? See if the master already has one 
> for me. This looks like a GET request to /certificate/.
>   * If it gets one, it's good to go.
> * Master didn't give me a cert. Okay, have I submitted a certificate 
> signing request before? Look in $ssldir/certificate_requests for my own 
> name.
>   * If there's one there, it bails and waits, assuming it's waiting for 
> the master to sign that thing. 
> * Okay, there's nothing there, but maybe I developed amnesia. Better ask 
> the master if I've asked for one. This looks like a GET request to 
> /certificate_request/.
>   * If the master says it's already asked, it will just bail and say "I'm 
> still waiting for that."
> * Okay, I never even asked for a cert, it looks like. Well, time to ask 
> for one. This looks like a PUT request to /certificate_request/.
>   * Now if autosign is turned on, it can GET /certificate/ and 
> continue; otherwise it'll bail and go through this whole process again next 
> time, in which case it says "yes I have a private key, no I don't have a 
> cert" and gets to work on the second step above. 
>

Nick, that's a pretty awesome explanation of the handshake and the 
corresponding REST calls. Is that written down anywhere official? Perhaps 
with corresponding Puppet Master / Agent log entries?

-- 
You received this message because you are subscribed to the Google Groups 
"Puppet Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to puppet-users+unsubscr...@googlegroups.com.
To post to this group, send email to puppet-users@googlegroups.com.
Visit this group at http://groups.google.com/group/puppet-users?hl=en.
For more options, visit https://groups.google.com/groups/opt_out.




Re: [Puppet Users] Referencing a variable from one class in another

2013-01-31 Thread Luke Bigum
On Wednesday, January 30, 2013 9:41:31 PM UTC, Ti Leggett wrote:

>
> On Jan 30, 2013, at 2:33 PM, jcbollinger wrote: 
>
> >> not mash all the public bits in to one globally public class that has 
> no nitty gritty bits to implement. In my example ::params is 
> considered the header for the module (granted a header that exposes values, 
> but that can't be helped due to the declarative nature of the DSL). There 
> should be no implementation in that sub-module and even  should 
> reference that 'header' to get the variables it needs to do its work. But 
> I'll still pay penance at the OO altar for all my past transgressions 
> against and abuses of it. 
> >> 
> > 
> > 
> > Maybe we're just having a terminology problem.  Puppet has no concept of 
> sub-modules, and the only construct available to hold reference-able, 
> non-global variables is the class.   Indeed, even modules themselves are a 
> Puppet implementation detail -- the DSL has no concept of them except as 
> top-level namespaces (but top-level namespaces often map to classes, either 
> instead of or in addition to mapping to modules). 
> > 
> > So, would you care to explain how your ::params then differs 
> from "mash[ing] all the public bits in to one globally public class that 
> has no nitty gritty bits to implement"?  Are you suggesting separate 
> ::params classes shadowing multiple different classes in the same module? 
>  Are you conflating class parameters with class variables? 
> > 
>
> Let's go back to my original example from http://pastie.org/5910079#. Not 
> stated in that code snip (for conciseness) is a module, kibana. Among other 
> things it needs to install an apache configuration to make it a useful 
> piece of software and that configuration is in the kibana::apache sub-class 
> in the form of a snippet that is tagged such that the apache module can 
> instantiate it later on at the proper time (there's an alternative to this 
> method). In order to do this, the kibana::apache class needs to know where 
> to install this configuration file so that the apache process can load it. 
> That location I chose to place in a variable, $config_d, in the apache 
> module in a sub-class (sorry for the improper nomenclature before) 
> apache::params. To solve my original problem I simply add: 
>
> include 'apache::params' 
>

For this specific example I would not model it this way. I would make the 
Apache module provide an interface to allow other modules to add to itself. 
I would make something like an 'apache::vhost' or 
'apache::extra_conf_file', which is just a wrapper around a Puppet File and 
a notify => Service[]. Thus the implementation of how to create/manage 
Apache config files (like their location) is wholly contained in the Apache 
module and other modules don't have to "know" specific details.

That design may not scale out for more complex examples though. Looking at 
my own modules I should really practice what I preach because all too often 
I just drop a file in /etc/httpd/conf.d from anywhere I please ;-)
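To make that concrete, a wrapper like the one described might look something like this (the conf.d path, service name, and parameter are illustrative assumptions, not from an actual module):

```puppet
# Hypothetical apache::extra_conf_file: callers supply a name and
# content; where config files live and how the service is reloaded
# stays wholly inside the apache module.
define apache::extra_conf_file($content) {
  file { "/etc/httpd/conf.d/${name}.conf":
    ensure  => file,
    owner   => 'root',
    group   => 'root',
    mode    => '0644',
    content => $content,
    notify  => Service['httpd'],
  }
}
```

A consumer module such as kibana would then just declare apache::extra_conf_file { 'kibana': content => template('kibana/apache.conf.erb') } without knowing anything about /etc/httpd/conf.d.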

right above the @file snippet in kibana::apache and everything is happy. I 
> think everyone can agree that is the right solution to the problem that I 
> posed; however, both you and Luke strongly suggested against having modules 
> include other module's variables willy nilly and instead move those 
> variables that multiple modules need to reference into a globally included 
> class, we'll call it globals::. This class, I assume (correct me if I 
> assume wrong), would always be instantiated before all others, say in the 
> nodes.pp file, to ensure that all subsequent modules could resolve the 
> variable appropriately. I would assume in that class you would have all of 
> these 'global' variables that multiple modules make use of so you'd have 
> (eventually) a long list in this one class: 
>
> class globals { 
>   $apache_config_d = '/etc/httpd/conf.d' 
>   $openldap_schema_d = '/etc/openldap/schemas' 
>   $rsyslog_d = '/etc/rsyslog.d' 
>   $ssl_ca_path = '/etc/pki/tls/certs' 
>   ... 
> } 
>
> And in my kibana::apache class I would reference 
> ${globals::apache_config_d}. If all of those assumptions are true, I'm 
> curious why this solution is any better? The only benefit that I can see is 
> that when writing a new module you don't have trouble making sure you've loaded 
> the proper prerequisites for variable resolution because it should already 
> be done. You're still including another class's variables, albeit only 
> ever in this one other class. The other alternative that has been alluded to 
> in a pure programmatic way is that none of those variables should be shared 
> between modules and each module should have their own local variable to 
> use. I'll consider this proposed as only for the purist and not really 
> tractable in any real complex environment. 
>
> What I proposed is only semantically different than the above but, in my 
> mind, is a cleaner (you don't have one huge file that has all va

[Puppet Users] Re: How to collect hostnames or host ips

2013-01-29 Thread Luke Bigum
Hi Dusty,

On Tuesday, January 29, 2013 2:30:14 AM UTC, Dusty Doris wrote:
>
> I'd like to be able to collect all the hostnames (fqdn) or ips of certain 
> hosts to be used in setting up firewall rules.  I'd like to search for 
> hosts that have included a particular class, perhaps by simply setting a 
> tag when that resource is included.
>
> eg:
>
> node 'node1' {
>   include 'somewebclass'
> }
>
> class somewebclass {
>   tag 'web'
>   # other stuff
> }
>
>
> Then in another class, I'd like to find all my 'web' hosts and allow them 
> access in a firewall rule.
> eg:
>
> class somedbclass {
>   tag 'db'
>   iptables { "allow db access":
> proto => 'tcp',
> dport => '3306'
> source => Node <| tag == 'web' |>,
> jump => 'ACCEPT'
>   } 
> }
>
> So, ultimately, I'd need that Node <| tag == 'web' |> to be an array of 
> hostnames or ipaddresses.
>
> This is just an example to try to explain what I am doing.  Does anyone 
> know how to do this?  Can I do this in puppet?  Do I need to write my own 
> function to handle this?  Or, can I use something like hiera or puppetdb to 
> do this?
>

Native Puppet doesn't have any such feature. I asked a similar question in 
this thread about a month ago where I was trying to bend Exported Resources 
to my will:

https://groups.google.com/forum/?fromgroups=#!searchin/puppet-users/luke$20bigum$20exported/puppet-users/zQgUDx2ixus/XpGFOo6OwvQJ

To save you some reading I would recommend using this module to pull raw 
data from PuppetDB, or something similar:

https://github.com/dalen/puppet-puppetdbquery

From there you could build your hash/array, then use that in a template or 
to create individual Puppet resources for your firewall rules.
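As a rough sketch of that approach (the function arguments are modelled on the dalen module's pdbresourcequery(), and the iptables resource mirrors the example in the question; treat both as assumptions, not tested code):

```puppet
# Ask PuppetDB for every Class resource carrying the 'web' tag and
# return only the declaring node's certname, giving an array of
# hostnames (second-argument behaviour per the module's README).
$web_hosts = pdbresourcequery(
  ['and',
    ['=', 'type', 'Class'],
    ['=', 'tag', 'web'],
  ],
  'certname')

# The array can then feed a template or a resource parameter:
iptables { 'allow db access':
  proto  => 'tcp',
  dport  => '3306',
  source => $web_hosts,
  jump   => 'ACCEPT',
}
```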

Hope that helps,

-Luke





Re: [Puppet Users] Referencing a variable from one class in another

2013-01-28 Thread Luke Bigum
On Monday, January 28, 2013 5:00:24 PM UTC, Ti Leggett wrote:

>
> Thanks for the response. 
>
> Can multiple classes include the same class. Let's say I instantiate the 
> apache class from manifests/nodes.pp which in turns includes 
> apache::params. Can kibana include apache::params then as well with no 
> conflict. I know you can't do this with the class {} style declarations. 
> Also, I thought the class {} style declarations were the preferred way or 
> is that just in the nodes.pp file?


Yes, they can. That's the main selling point for the "include class" 
syntax. And you are right, you can't use the class {...} syntax more than 
once or you get a duplicate definition error.

However, let me warn you against going overboard with having classes 
include other classes from other modules. It can be annoying to track down 
where resources are coming from for any given node if you've got cross-module 
inclusion: kibana includes httpd includes mod_ssl includes openssl includes 
somethingelse includes ... How did this get on here?

A cleaner way might be to declare cross module relationships using the 
Arrow operators:

class kibana::apache { 
  Class[apache::params] -> Class[kibana::apache]
  ...
}

And then you make a house rule to have all your classes instantiated in 
your node definitions:

node woof {
  include kibana
  include apache::params
}

If apache::params is missing, you'll get an error saying so. It also fits 
rather nicely into an ENC if you want to go in that direction now / later.

-Luke





[Puppet Users] Re: Terrible exported resources performance

2013-01-21 Thread Luke Bigum
Hi Daniel,

On Monday, January 21, 2013 1:05:26 PM UTC, Daniel wrote:
>
> In the larger env it takes about 70 minutes, if it manages to finish at 
> all. Initially, as a "quick" test, I was running puppetdb without postgres 
> and had to give it 5GB to get it to finish at all (70 mins). With postgres 
> 8.4, load on the puppetmaster is significantly reduced, but with 512MB for 
> puppetdb (128 + 1MB per node, and then double it for good measure) puppetdb 
> still runs out of memory. I set it to 1GB and puppedb just crashed again 
> (I've got dumps). Trying with 2GB now. I haven't fiddled with thread 
> settings, but my puppet agents aren't deamonized or 'croned', I run them 
> using mcollective or manually. So there's only a single puppet agent 
> running during this test, on the core nagios server. It seems that there's 
> a ruby process taking 100% of one core during this run and nothing else 
> "dramatic" seems to be happening (except for puppetdb dying of course).
>
>
Given enough RAM it doesn't sound like PuppetDB is the problem any more, is 
that correct?

The Ruby process is most likely a Puppet Master thread doing the catalog 
construction. I think you're suffering from a similar problem that we had 
recently, where it's not specifically resource collection that's taking up 
all the time, it's the Puppet Master turning the exported resources 
information into one enormous catalog that takes too long.

We got around this by bypassing exported resources and querying the 
information from PuppetDB directly and using that information in a 
template. I suggested the following to another user a few days ago in this 
thread:

https://groups.google.com/forum/#!topic/puppet-users/X6Lm-0_etbA

-Luke




[Puppet Users] Re: Error: Could not retrieve catalog from remote server: execution expired

2013-01-17 Thread Luke Bigum
I'm not sure if there's a way to increase the timeout for exported resource 
reconstruction, however rather than doing a Puppet resource collection you 
can query the raw data from PuppetDB:

https://github.com/dalen/puppet-puppetdbquery

Here is an example a colleague of mine used to vastly speed up the catalog 
of our Nagios server. Here it queries exported 'hostgroup_member' resources 
with a specific tag, then uses the returned hash of data in a template to 
define all Nagios hostgroups:

$hostgroup_members = pdbresourcequery(
  [ 'and',
    [ '=', 'tag', $nagios::params::sites ],
    [ '=', 'type', 'Nagios::Config::Hostgroup_member' ],
    [ '=', 'exported', true ],
  ]
)
file { $nagios::params::hostgroups_yaml:
  content => template('nagios/nagios_hostgroups.yaml.erb'),
  notify  => Class['nagios::service'],
}

This cut our catalog down from over 2 minutes in compile/collect time to 
around 20-30 seconds.
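The template itself isn't shown above. As an illustration only: each element pdbresourcequery() returns is a hash describing one resource (the 'title' field name here is assumed from PuppetDB's resources endpoint), so a template loop over the collected titles can be approximated inline like this:

```puppet
# One-line stand-in for the .erb template: pull out the resource
# titles, de-duplicate, and join them for use in a config file.
$hostgroup_titles = inline_template(
  '<%= @hostgroup_members.map { |r| r["title"] }.sort.uniq.join(",") %>')
```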

Hope that helps,

-Luke

On Wednesday, January 16, 2013 11:35:23 PM UTC, Joshua Buss wrote:
>
> Wow, I just found this by googling for the error message and I'm getting 
> the exact same problem.. unable to run puppet agent on the same machine 
> where I have the puppet master running.. times out on loading plugin.  I'm 
> running on ubuntu server 11.10, version  2.7.1-1ubuntu3.7
>
> On Monday, January 7, 2013 1:19:41 PM UTC-6, Rob Smith wrote:
>>
>> Hi everyone,
>>
>> I recently ran into an issue where my puppetmaster can't run puppet on 
>> itself. It errors out with the following:
>> Error: Could not retrieve catalog from remote server: execution expired
>> Warning: Not using cache on failed catalog
>> Error: Could not retrieve catalog; skipping run
>>
>> I'm running Puppet 3 with passanger and puppetdb (hsql). I've tried 
>> restarting puppetdb and apache to no effect. If I wipe out puppetdb, it'll 
>> work again until all 17 servers are back into the catalog and it times out 
>> from then on. The puppet master is also my nagios node so it does have a 
>> huge amount of resources to assemble.
>>
>> Can I configure puppet to wait longer for the catalog generation step? 
>> I've search the docs without anything standing out to me.
>>
>> Thanks,
>> ~Rob
>>
>




[Puppet Users] Re: Question on defines.

2013-01-09 Thread Luke Bigum
Hi James,

On Tuesday, January 8, 2013 11:19:13 PM UTC, jdehnert wrote:
>
> I want to pass a few variables through to the other files in a module.  I 
> have a define statement that sets one default...
>
> define redis::install ( $port = 6397, $version )
>
> What I am unclear on is how far does this define reach in my module?  For 
> instance, do I need to have everything that uses these variables included 
> in the same file as the define, or if I put the define in init.pp can I use 
> those variables in the config.pp file?
>
>
I'm not quite sure what you mean by "how far does the define reach in my 
module". Those variables in the example above are parameters to the define 
itself and can be used inside that define, you can't refer to those 
variables from another container like you can a variable in a class.

If you reference a variable outside the current scope you need to make sure 
that the container the variable is in is declared in the manifest for this 
node. In short, if you are using $redis::params::version you need to ensure 
you "include redis::params" or similar.

To demonstrate, run the following code with "puppet apply ":

define test($var = 'woof') {
  include dog
}

class dog {
  if $var == undef {
warning("var = undef")
  } else {
warning("var = '$var'")
  }
}

test { "meow": }

# try to reference the define's variable
warning("define var = '${test::var}'")


class one {
  $foo = 'bar'
  include two
}

class two {
  warning("foo = '${foo}'")
}

include one


-Luke
 

> --
> Thanks,
>   James "Zeke" Dehnert
>
>
>
>




[Puppet Users] Re: Dynamic Environments and Hiera

2013-01-09 Thread Luke Bigum
Hi Brad,

On Tuesday, January 8, 2013 10:30:11 PM UTC, Brad Ison wrote:
>
> Hi, 
>
> I've been using dynamic environments, one per Git branch, similar to 
> what's described here: 
>
>   http://puppetlabs.com/blog/git-workflow-and-puppet-environments/ 
>
> I've come to really like that workflow, but I'm struggling with how 
> best to integrate it with Hiera. In addition to short lived dynamic 
> branches, I'll have some longer lived ones that feed into master 
> (production), e.g. staging, dev, etc. 
>

I am using the very same workflow.
 

>
> My hierarchy has traditionally looked something like this: 
>
>   - 'environments/%{environment}/%{location}', 
>   - 'environments/%{environment}', 
>   - 'global' 
>
> What's the best way of having new environments pick data from the 
> right spot in the hierarchy without having to cram everything into the 
> global / common root? 
>

Your hierarchy data store is outside your environment, here is what mine 
looks like:

:backends:
  - yaml
:hierarchy:
  - "%{fqdn}"
  - "%{role}_role"
  - "%{pop}"
  - global
:yaml:
  :datadir: /etc/puppet/environments/%{environment}/hiera/

So if I push a new feature to branch new_feature, I get Puppet environment 
"new_feature", which has its own copy of the Hiera data store with all my 
new_feature related Hiera keys in it. When it comes to environments my data 
follows the same branches and "versions" of my code, and when I merge code 
into my main line production branch the matching Hiera keys go along with 
it.

Hope that helps,

-Luke


> For example, if I branch off of dev, creating a new environment called 
> 'new_feature', only 'global' would be in scope unless I explicitly 
> copy the dev data to 'new_feature.yaml', which feels wrong. 
>
>
> Am I approaching this all wrong? Any advice? 
>
> -- 
> Thanks, 
> Brad 
>




[Puppet Users] Re: inspect resources that are already added to a manifest

2013-01-03 Thread Luke Bigum
On Wednesday, January 2, 2013 3:51:37 PM UTC, jcbollinger wrote:

>
>
> On Saturday, December 22, 2012 12:20:10 PM UTC-6, Luke Bigum wrote:
>>
>> Hi all,
>>
>> Does anyone know of a way to inspect resources that are already parsed in 
>> a node's manifest during catalog compilation? This would certainly need 
>> some serious Ruby Fu.
>>
>
>
> This is a bad idea.  If the Puppet circuits in your brain didn't trip 
> over "inspect", they certainly should have sounded the alarm over "serious 
> Ruby Fu".  You are fighting against the tool.
>
 
>
>>
>> As an example, imagine I have a number of arbitrary files defined by 
>> multiple classes and it is impossible to get an all encompassing list of 
>> these files:
>>
>> file { 'woof': }
>> file { 'cows': }
>> file { 'meow': }
>> ...
>> $all_files = inline_template(...)
>>
>> I would like to be able to gather those file names into a Puppet variable 
>> - this would be parse order dependent. It would be fantastic if it could 
>> handle exported resources that have just been collected as well.
>>
>
>
> And "parse-order dependent"?  Of course it is.  You need a 
> Puppet-bogometer.
>
> So what configuration objective are you actually trying to accomplish 
> here?  There is likely a more robust, less Rubyriffic way to accomplish it.
>
>
Ohh don't worry, John, my bogometer was going off like crazy, the needle 
almost broke ;-)

I'm taking shortcuts in my spare time with a tool that's probably 70% right 
for the job. It's for monitoring - I really like the idea of a Puppet 
module to describe or advertise how to monitor itself, it keeps them very 
self contained.

Just a bit more on this - I generally see three categories of monitoring 
tools. Ones that are configured separately from your CRM and end up being a 
source of truth on their own are in my mind the worst. The next level up 
are ones either defined from or derived from your CRM. The best are 
auto-discovery, but they cost an absolute fortune. I'm trying to move my 
team from the first one to the second one with as few "new tools" as 
possible, which is where the "70% right for the job" comment comes from.

I'm using exported resources to describe how modules are monitored. The 
problem is that exported resources are not the equivalent of raw 
information passing. So when I want to start doing trickier things like 
group and analyse what is collected, exported resources don't cut it 
because it's not what they are designed for.

Specifically what I was trying to do was collect exported resources of the 
same type and group them on the monitoring server. There is no predefined 
list of service names anywhere (unless you parse the node definitions) so 
that's why I wanted to go from resource collection to Array of Strings. A 
colleague has managed to reduce my 300 lines to 50 though so the need for 
craziness is reduced somewhat. We still need to do the "Export a File" 
trick and run a script on the monitoring server to build the complex 
configuration that exported resources are not designed to handle.

The next iteration of this work might be to scrap resource collection in 
favour of querying PuppetDB directly to figure out what to monitor, but 
that's a lot more work than I'm prepared to do at this stage. Maybe in a 
few months... ;-)

-Luke




[Puppet Users] Re: Trying to use a facter information in manifest.

2012-12-28 Thread Luke Bigum
On Thursday, December 27, 2012 11:12:36 PM UTC, JGonza1 wrote:

> I am trying to use information that facter gathers on the agent server in 
> the manifest. I am trying to use "domain => dev.com" depending on what 
> domain is I deploy the file. I ran the manifest and it did not give me an 
> error but it did not deploy the file. My code is below. 
> In my files directory for this manifest I have these files
> aliases
> submit.cf.dev.com
> submit.cf.test.com
>  
>

What you're trying to do is sensible.

The first thing I do with these sorts of problems is work backwards from 
the Agent - is the Agent trying to manage this resource? The last run 
should store a list of classes and resources in these files:

$ grep submit /var/lib/puppet/state/resources.txt
$ grep submitcf /var/lib/puppet/state/classes.txt

Or you can run the Agent with --evaltrace and it will print out each 
resource it tries to manage - check if submit.cf scrolls past:

$ puppet agent --test --evaltrace

If submit.cf is not there, then the Puppet Master does not think that your 
node should be managing those resources and the problem is in manifest 
compilation. Make sure this node's definition is declaring the correct 
submitcf classes.

If you want to, you can run a catalog compilation in debug mode. It won't 
help you with this specific problem but it's useful to know:

$ puppet master --compile  --debug

This can be a bit complicated to read as you'll get a deluge of debug 
messages while the catalog is being compiled, then a big dump of JSON which 
is the catalog itself.

Lastly, I would rewrite your classes below like this:

class sendmailnew {
  package { "sendmail":
    ensure => installed,
    notify => Service["sendmail"],
  }

  service { "sendmail":
    ensure     => running,
    enable     => true,
    hasrestart => true,
    hasstatus  => true,
    require    => [ Package["sendmail"], File["/etc/mail/submit.cf"] ],
  }

  file { "/etc/mail/aliases":
    ensure => file,
    source => "puppet:///modules/sendmailnew/aliases",
    owner  => root,
    group  => root,
    mode   => 644,
    notify => Exec["mailaliases"],
  }

  exec { "mailaliases":
    command     => "/usr/bin/newaliases",
    refreshonly => true,
    notify      => Service["sendmail"],
  }

  file { "/etc/mail/submit.cf":
    ensure  => file,
    owner   => root,
    group   => root,
    mode    => 644,
    source  => "puppet:///modules/sendmailnew/submit.cf.$domain",
    require => Package["sendmail"],
    notify  => Service["sendmail"],
  }
}

Have a look at the differences and if you have any questions, please ask.

HTH,

-Luke

MY init.pp file is the one below
> class sendmailnew {
> exec { "mail":
>  command => "/usr/bin/yum -y install sendmail",
>   }
>   exec { "restart":
>command => "/etc/init.d/sendmail restart",
>   }
> file {
> "/etc/mail/aliases":
>   ensure => file,
>   source => "puppet:///sendmailnew/aliases",
>   owner => root,
>   group => root,
>   mode => 644;
>   }
>   exec { "mailaliases":
>command => "/usr/bin/newaliases",
>   }
> }
> class submitcf ($domain) {
>   file { submit:
>  path => $domain ? {
>  default => "/etc/mail/submit.cf",
>   },
>   ensure => file,
>   owner => root,
>   group => root,
>   mode => 644,
>   source => "puppet:///sendmailnew/submit.cf.$domain";
>   }
> }
>




[Puppet Users] inspect resources that are already added to a manifest

2012-12-22 Thread Luke Bigum
Hi all,

Does anyone know of a way to inspect resources that are already parsed in a 
node's manifest during catalog compilation? This would certainly need some 
serious Ruby Fu.

As an example, imagine I have a number of arbitrary files defined by 
multiple classes and it is impossible to get an all encompassing list of 
these files:

file { 'woof': }
file { 'cows': }
file { 'meow': }
...
$all_files = inline_template(...)

I would like to be able to gather those file names into a Puppet variable - 
this would be parse order dependent. It would be fantastic if it could 
handle exported resources that have just been collected as well.

Thanks,

-Luke




[Puppet Users] Re: Trying to get complex data set into Puppet from ENC

2012-12-20 Thread Luke Bigum
Hi Jared,

On Wednesday, May 23, 2012 1:10:21 AM UTC+1, Jared Ballou wrote:

> Hi everyone, 
>
> I've been reading the groups here for a while, and have gotten a lot 
> of things fixed by finding other people's posts, so hopefully someone 
> will be able to set me straight. I am working on a Puppet deployment 
> that needs to have a lot of disparate data pulled together, and as far 
> as the ENC I created to pull it all in, everything has worked great. 
> However, I'm running into a problem instantiating Apache virtual 
> hosts. Here is some abridged output from my ENC: 
> --- 
> classes: 
>   app::lamp: 
> appdata: 
>   sites: 
> Some Website: 
>   id: "2" 
>   name: Some Website 
>   servername: somewebsite.com 
>   svntag_prod: trunk 
>   svntag_dev: trunk 
>   documentroot: ~ 
> Another Website: 
>   id: "4" 
>   name: Another Website 
>   servername: anotherwebsite.com 
>   svntag_prod: "1.2.0" 
>   svntag_dev: "1.3.0-rc4" 
>   documentroot: ~ 
> Third Website: 
>   id: "6" 
>   name: Third Website 
>   servername: thirdwebsite.com 
>   svntag_prod: trunk 
>   svntag_dev: trunk 
>   documentroot: "/opt/thirdwebsite/customhtdocs" 
>
> So, I have some other classes that are parameterized and I can 
> reference $appdata[$key] inside those manifests and everything works 
> fine for strings or arrays. My issue is getting this hash of hashes in 
> [appdata][sites] turned into vhosts. I tried using create_resources to 
> no avail,


I use create_resources() a bit with hash data pulled from Hiera and don't 
have too many problems with it (despite its annoyingly vague error 
messages). Do you have an example of where it has not worked for you? I 
would have thought:

create_resources('apache::vhost', $appdata['sites'])

should work fine, though I don't have an ENC so I'm not sure exactly how 
this data gets presented to you - it is a Puppet hash, right?
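One detail worth checking, since it is a common cause of vague create_resources() failures: every inner key of the hash must match a parameter of the target define, so ENC bookkeeping fields like "id" or "svntag_prod" above would need stripping first. A minimal sketch with invented parameter names:

```puppet
# Hypothetical define; real apache::vhost implementations differ.
define apache::vhost($servername, $documentroot = '/var/www/html') {
  notify { "vhost ${name}: ${servername} (${documentroot})": }
}

# One apache::vhost is declared per outer key; inner keys that are
# not parameters of the define raise an "Invalid parameter" error.
$sites = {
  'Some Website'  => { servername => 'somewebsite.com' },
  'Third Website' => { servername   => 'thirdwebsite.com',
                       documentroot => '/opt/thirdwebsite/customhtdocs' },
}

create_resources('apache::vhost', $sites)
```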
 

> tried dumping the ENC to YAML and using Hiera to parse that, 
> and I have struck out in every way. And, honestly, I think there must 
> be a better way to do this. The data is all in a single MySQL table, 
> so I looked at hiera-mysql backend, but I think I am over my head 
> here. Has anyone got a good example I could reference doing something 
> like this, especially for multi-dimensional hashes? I was starting to 
> look at just converting it to JSON or just comma delimited text and 
> feed it to Puppet as a string to be parsed, but that just seems wrong. 
> I've been at this 4 hours now with no luck, any help anyone can 
> provide would be greatly appreciated. 
>
> Thanks, 
>
> -Jared




Re: [Puppet Users] check if file exists on client and master

2012-12-13 Thread Luke Bigum
Romain, I am confused.

In your first post you said you need to check if a package exists on the 
"Agent", the Puppet client.

Now below you say you need the check executed on the Master.

Facts are executed on the Agents and only ever on Agents. If you want to 
check for something on a client/agent machine, you use a Fact, like the one 
you posted below.

If you want to execute arbitrary code on the Master (during catalogue 
compilation) probably the simplest thing you are after is the Generate 
function:

http://docs.puppetlabs.com/references/latest/function.html#generate

On linux, I would use something that looks a bit like this:

class woof {
  # Note: generate() aborts compilation if the command exits non-zero,
  # and /bin/test does not expand globs itself, so run the check via a
  # shell that always exits 0 and test the captured output instead:
  $file_exists = generate('/bin/sh', '-c',
    'test -f /softw4pc/Misc/pfoleproxy/pfoleproxy*.txt && echo true; exit 0')
  if $file_exists != '' {
  ...
  }
}

There are other ways you can execute arbitrary code, like embedded Ruby 
with the inline_template() function, pure Ruby manifests, or bury the code 
somehow in a custom type and provider.

Does that help?

-Luke

On Thursday, December 13, 2012 6:46:07 AM UTC, Romain Gales wrote:
>
> The facter should be executed on the server instead on the client.
>
>
>
>
> On Thursday, December 13, 2012 1:29:23 AM UTC+1, Jakov Sosic wrote:
>>
>> On 11/28/2012 09:46 PM, Romain Gales wrote: 
>> > there is what i tried: 
>> > 
>> > # getpfoleproxyver.rb 
>> > # 
>> > Facter.add(:getpfoleproxyver) do 
>> >   setcode do 
>> >   Facter::Util::Resolution.exec('basename `ls 
>> > /softw4pc/Misc/pfoleproxy/pfoleproxy*.txt`') 
>> >   end 
>> > end 
>> > 
>> > the fact is working fine, but how to use this in my manifest? 
>> > i tried a lot but it was always empty? 
>> > 
>> > $getpfoleproxyver should be correct, no? 
>>
>> Are you sure it's working on the client? You can see the value when you 
>> type facter -p | grep getpfoleproxyver 
>>
>> ? 
>>
>




Re: [Puppet Users] Converting puppet client to server

2012-12-13 Thread Luke Bigum
On Wednesday, December 12, 2012 10:35:21 PM UTC, Bret Wortman wrote:

>  Yeah, I was starting to think that was the solution.  
>
>
That's not strictly necessary: you can install a Puppet Master with Puppet 
just fine; the problem you're running into is how to manage the Puppet CA 
across multiple Masters. This is not an easy problem to solve. If you start 
a master for the first time it will initialise its own personal CA and 
certificate. This will conflict with the cert it got from the *other* 
master when it was installed, and is probably the cause of your connectivity 
problems. Also, your other agents won't be able to jump between masters 
because the CAs are different.

I would break the problem into these tasks:

- Decide on a centralised CA (a Puppet Master Master, even) from which you 
can generate the other Puppet Masters' certificates, and give that cert the 
'puppet' alias if you use it at your sites (puppet ca generate 
woof.hostname.com --dns-alt-names puppet)
- Figure out how to get this cert and the Master CA onto your new Puppet 
Master instead of letting the new Master generate its own. NFS? HTTPS 
download? Package?
- Figure out how to share certificates between Puppet Masters so an Agent 
can check in to different Puppet Masters. Centralised CA? Multi-way rsync?

-Luke

-- 
> Bret Wortman
> http://bretwortman.com/
> http://twitter.com/bretwortman
>
> On Wednesday, December 12, 2012 at 5:26 PM, Jakov Sosic wrote:
>
> On 12/12/2012 10:04 PM, Bret Wortman wrote:
>
> Is there an easy way to convert a puppet client into being a puppet master?
>
> Here's the scenario. I'm using puppet to configure all my systems, and
> would like it to be able to deploy a new puppet master as well. We have
> systems worldwide so having local puppet masters is very desirable for
> fault tolerance. So Kickstart (via cobbler) installs a puppet client
> during the initial system installation, then puppet installs everything
> else. And I've written a puppet-server module to attempt to deploy the
> puppet-server package, but I end up getting into certificate problems
> every time.
>
> The initial cert draws complaints, so I delete it and clean the
> certificate from the master, but then the systems will not connect under
> any circumstances:
>
> # puppet agent -t
> Exiting: no certificate found and waitforcert is disabled
>
> There's no request on the master (either this or the other).
>
> Thoughts?
>
>
> You should deploy master through cobbler, or run masterless puppet to
> set up the master.
>




[Puppet Users] Re: How to handle multi-variable cross cutting concerns in hiera?

2012-12-11 Thread Luke Bigum


On Tuesday, December 11, 2012 5:10:48 PM UTC, Schofield wrote:
>
>
> Hiera allows you to lay out your data in two dimensions: data file and 
>> key.  Whatever selection rules you want to use to choose particular data 
>> need to operate in that context.  There are at least three ways in which 
>> you can embed additional dimensions:
>>
>>1. You can create separate hierarchies or hierarchy pieces based on 
>>node data, by interpolating the data into the hierarchy definition file
>>2. You can use compound keys
>>3. You can expand your values into hashes (with the hash keyspace 
>>constituting an additional dimension)
>>
>> Would you mind going into detail on options 2 and 3? 
>
>  
>
>> Those can be used separately or in combination, and even in 
>> self-combination, so in principle, you can use as many dimensions as you 
>> want.  In practice, it can get very messy, very quickly.
>>
>
> Getting messy, quickly is my concern if the hierarchy is not the best fit 
> for the enterprise or the enterprise architecture changes. Are there any 
> rules of thumb to consider that would suggest hiera is not the best data 
> externalization tool and someone might be better off with a RDMS or 
> denormalized search index as the external data source?
>

I can't speak for John, but I can take a guess at what he was getting at 
regarding hashes getting complicated. You can use Hiera to store complex 
information structures like the one below:

postfix_additional_settings:
  smtp_tls_security_level: encrypt
  tls_random_source: dev:/dev/urandom
  smtpd_use_tls: "yes"
  smtpd_tls_loglevel: 1

Then inside a Puppet manifest or template you can retrieve and handle the 
hash in a more concise manner than requesting each postfix configuration 
key individually. The Puppet and template snippet below will put any 
Postfix options I add to the hash above into my main.cf file without me 
having to go in and edit the postfix module itself:

Manifest:

  $postfix_additional_settings = hiera_hash('postfix_additional_settings', 
undef)
  $postfix_main_conf_file = '/etc/postfix/main.cf'
  file { $postfix_main_conf_file:
content => template("${module_name}/${postfix_main_conf_file}.erb"),
  }

Template snippet:

################################################################################
# Everything below here comes from the Hiera postfix_additional_settings hash  #
################################################################################
<% if @postfix_additional_settings %>
<% @postfix_additional_settings.sort.each do |key, val| -%>
<% if val -%>
<%= key %> = <%= val %>
<% end -%>
<% end -%>
<% end -%>

That's not too bad for a Postfix config where all the keys are unique and 
there's only one level of depth. I don't have to have much complexity in my 
template file to handle the different types of Postfix options my sites 
have, that's all in Hiera.
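Outside of Puppet, the same loop can be exercised in plain Ruby, which is a handy way to test a template against a Hiera hash before deploying it (this is a standalone sketch; in a real Puppet template the hash arrives as an instance variable rather than a local):

```ruby
require 'erb'

# The Hiera postfix_additional_settings hash from the example above.
settings = {
  'smtp_tls_security_level' => 'encrypt',
  'tls_random_source'       => 'dev:/dev/urandom',
  'smtpd_use_tls'           => 'yes',
  'smtpd_tls_loglevel'      => 1,
}

# Same logic as the template snippet: one "key = value" line per
# setting, sorted for stable output.
template = <<~ERB
  <% settings.sort.each do |key, val| -%>
  <% if val -%>
  <%= key %> = <%= val %>
  <% end -%>
  <% end -%>
ERB

output = ERB.new(template, trim_mode: '-').result(binding)
puts output
```

The `trim_mode: '-'` argument is what honours the `-%>` tags, the same trimming Puppet applies to its templates.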

Now here's a more complex Template where we write a HAProxy configuration 
file. This hash:

haproxy_listen_hash:
  something:
bind: 
  ssl: 1.1.1.1:443 ssl crt /etc/pki/tls/private/1.1.1.1.pem
servers:
  woof: 2.2.2.2:80 check
opts:
  mode: tcp

Feeds this template:

<% @haproxy_listen_hash.sort.each do |key, listen_hash| -%>

listen <%= key %>
<%   listen_hash.sort.each do |key, val|   -%>
<% if key == "bind" -%>
  # Bind to these addresses
  # -----------------------
<%   val.sort.each do |subkey, subval|  -%>
  # <%= subkey %>
  bind <%= subval %>
<%   end -%>

<% elsif key == "servers" -%>
  # Forward traffic to these servers
  # --------------------------------
<%   val.sort.each do |subkey, subval|  -%>
  server <%= subkey %> <%= subval %>
<%   end -%>

<% elsif key == "opts" -%>
  # Extra options
  # -------------
<%   val.sort.each do |subkey, subval|  -%>
  <%= subkey %> <%= subval %><% if subkey == "stick-table" && @haproxy_peers %> peers mypeers<% end %>
<%   end -%>
<% elsif key == "stats" -%>
<%   val.sort.each do |subkey, subval| -%>
  stats <%= subkey %> <%= subval %>
<%   end -%>
<% end -%>
<%   end -%>
<% end -%>

It still works well for our purposes, but it's starting to get quite 
complicated. There are so many nested hashes the template is difficult to 
read. We do manage to preserve just raw haproxy information in Hiera though.

If you need to go there, the hiera_hash() function adds even more 
complexity: rather than returning the first match, it merges matching keys 
from every level of your hierarchy, top down, into a single hash. That's 
useful for overriding different portions of your default hash in other 
parts of your hierarchy.
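As a rough illustration of that merge behaviour (this is a sketch of top-level priority merging, not Puppet's actual lookup code), a node-level hash overriding a common-level one looks like:

```ruby
# Sketch of hiera_hash's top-level merge: keys defined at a more
# specific level of the hierarchy win, keys only present at the
# common level are kept.
common = {
  'smtpd_use_tls'      => 'yes',
  'smtpd_tls_loglevel' => 1,
}
node_override = {
  'smtpd_tls_loglevel' => 4,  # more verbose logging on this one node
}

# Merge from least to most specific so the most specific level wins.
merged = [common, node_override].reduce({}) { |acc, level| acc.merge(level) }
p merged  # the override wins for smtpd_tls_loglevel
```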

-Luke


[Puppet Users] Re: exec GIT Variable PS1

2012-12-11 Thread Luke Bigum
On Tuesday, December 11, 2012 4:18:45 PM UTC, MaTi Villagra wrote:

> Hello I'm trying to push PS1 variable at .bashrc file 
>
> exec { 'GIT PS1 Variable':
> cwd => '/home/developer/.bashrc',
> command => '/bin/echo "PS1='[\u@\h \W\$(__git_ps1  " \"" (%s)"\"")]\$ 
> ' " >> /home/developer/.bashrc',
> user => developer,
> group => developer,
>}
>

You're using single quotes for your Puppet string, but you've also got 
single quotes inside your bash PS1 line, which confuses the Puppet parser. 
The string ends at the single quote just before the "[", so Puppet then 
tries to parse the "[" as the start of an array, which is why the error 
complains about a missing "]".

This function might help you get your escaping correct: 

http://docs.puppetlabs.com/references/latest/function.html#shellquote
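For a feel of what correct escaping looks like, Ruby's standard Shellwords library (an analogy I'm using here, not what Puppet's shellquote calls internally) can escape the PS1 fragment from this thread:

```ruby
require 'shellwords'

# The PS1 fragment, held in a single-quoted Ruby string so the
# backslashes stay literal.
ps1 = '[\u@\h \W$(__git_ps1 " (%s)")]\$ '

# Shellwords.escape backslash-escapes every shell metacharacter, so
# the fragment survives one level of shell parsing intact.
escaped = Shellwords.escape(ps1)
command = "echo PS1=#{escaped} >> /home/developer/.bashrc"
puts command
```

Shellwords.split(escaped) round-trips back to the original string, which is a quick way to sanity-check the escaping before pasting it into an exec command.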
 

>
> But client side  I get 
>
> Dec 11 10:15:43 glb7240 puppet-agent[19762]: Could not retrieve catalog 
> from remote server: Error 400 on SERVER: Syntax error at '['; expected ']' 
> at /etc/puppet/modules/defaults/manifests/bash-extras.pp:53
>
> If I copy command it work perfectly. Any toughs  ? 
>
>
> Thanks. Appreciate. 
>
>
>
>
>
>




Re: [Puppet Users] Puppet report aggregation

2012-12-10 Thread Luke Bigum
On Thursday, December 6, 2012 10:07:43 PM UTC, John Warburton wrote:

> On 6 December 2012 20:29, Luke Bigum wrote:
>
>> I haven't looked at The Foreman in a while but in my mind it's more like 
>> Puppet Dashboard - correct me if I'm wrong. What I'm aiming for is a tool 
>> that can aid change / release management where we run Puppet --noop across 
>> the estate, gather all the reports, then summarise what changes will be 
>> applied (resolv.conf changes on all hosts, fstab changes on 20 hosts, 
>> service X refreshes on Y hosts).
>>
>> I don't really want to be searching for explicit resources changing 
>> across hosts, it's the resources I don't know about that worry me ;-) Is 
>> the foreman worth a look in this case?
>>
>> Luke, we use the puppet dashboard which aggregates all the reports and 
> then lets us suck down a CSV ("Export nodes as CSV" on front page) which 
> contains a status of all resources on all machine reporting. We run puppet 
> in noop all the time, so need similar reports you are requesting. It is 
> just a matter of slicing & dicing the csv to get what you want
>
> % wget http://localhost:3000/nodes.csv
>
>
Thanks John and Ohad,

I use Puppet Dashboard but I've never tried that control before ;-) That 
should do as a very good start.

Cheers,

-Luke




Re: [Puppet Users] Puppet report aggregation

2012-12-06 Thread Luke Bigum
On Wednesday, December 5, 2012 12:50:43 PM UTC, ohad wrote:

> You could use foreman for that? filtering the hosts via search should 
> allow you to find the exact resources you are looking for?
>
> Ohad
>
>
Hi Ohad,

I haven't looked at The Foreman in a while but in my mind it's more like 
Puppet Dashboard - correct me if I'm wrong. What I'm aiming for is a tool 
that can aid change / release management where we run Puppet --noop across 
the estate, gather all the reports, then summarise what changes will be 
applied (resolv.conf changes on all hosts, fstab changes on 20 hosts, 
service X refreshes on Y hosts).

I don't really want to be searching for explicit resources changing across 
hosts, it's the resources I don't know about that worry me ;-) Is the 
foreman worth a look in this case?

-Luke
 

>
> On Tue, Dec 4, 2012 at 11:00 PM, Luke Bigum wrote:
>
>> Hi all,
>>
>> Can anyone recommend any tools for Puppet report aggregation? I'm 
>> interested in something that can take a given set of Puppet reports and 
>> summarise to me what resources have changed across all hosts.
>>
>> If nothing exists I will look to write one myself. In that case, is 
>> Puppet report format 3 valid for Puppet 3.0?
>>
>> http://projects.puppetlabs.com/projects/puppet/wiki/Report_Format_3
>>
>> Thanks,
>>
>> -Luke
>>




[Puppet Users] Puppet report aggregation

2012-12-04 Thread Luke Bigum
Hi all,

Can anyone recommend any tools for Puppet report aggregation? I'm 
interested in something that can take a given set of Puppet reports and 
summarise to me what resources have changed across all hosts.

If nothing exists I will look to write one myself. In that case, is Puppet 
report format 3 valid for Puppet 3.0?

http://projects.puppetlabs.com/projects/puppet/wiki/Report_Format_3

Thanks,

-Luke




[Puppet Users] Re: Best way to manage routing entries

2012-12-04 Thread Luke Bigum
On Tuesday, December 4, 2012 7:52:20 PM UTC, Wolf Noble wrote:

> Hello all, 
>
> Is anyone managing custom static routes via puppet? if so, how? 
>
>
Yes, along with all other networking config files (Red Hat based ifcfg-* 
files).

We started with this module: https://github.com/heliostech/puppet-network

And then added routes and rules on our own.

We only have it managing the file content; we don't attempt to restart 
networking or repair the properties of existing interfaces, routes, etc. So 
it's not too much of an improvement over just using File resources to copy 
out templates.
 

> I'm wondering if there's a better cross-platform way of adding routes than 
> a custom init script that defines the routes that need to be associated 
> with each interface… 
>
> or maybe someone with extra tasty brains (zombies will like them even 
> more) has a defined type that can already do the fu necessary for 
> debian/rhel flavors of routing, and I just haven't found it yet... 
>
> Thoughts? 
>
>
>
>
>
>
>  
>
> This message may contain confidential or privileged information. If you 
> are not the intended recipient, please advise us immediately and delete 
> this message. See http://www.datapipe.com/legal/email_disclaimer/ for 
> further information on confidentiality and the risks of non-secure 
> electronic communication. If you cannot access these links, please notify 
> us by reply message and we will send the contents to you. 
>




[Puppet Users] Re: Wrapper classes, ordering & anchors

2012-10-11 Thread Luke Bigum


On Thursday, October 11, 2012 3:09:02 PM UTC+1, llowder wrote:
>
>
>
> On Thursday, October 11, 2012 8:37:39 AM UTC-5, alcy wrote:
>>
>> Hello, 
>>
>> I have a class like: 
>>
>> class wrapper { 
>>   include foo 
>>   include bar 
>>   include baz 
>> } 
>>
>> And a node like: 
>>
>> node x { 
>>   include someclass 
>>   include wrapper 
>>   Class["someclass"]->Class["wrapper"] 
>> } 
>>
>> The class chaining in node x doesn't get respected. In irc I was 
>> suggested there being a possibility of this being related to #8040. 
>> Can anyone suggest if that indeed might be the case ? Is there a clear 
>> process to tell if certain chaining of classes or resources would 
>> mandate using anchors or not ? Just to be clear, there is no order 
>> required in the classes inside the wrapper class. But just that to 
>> ensure before any of these, the class "someclass" gets applied. Any 
>> ideas, and possible approaches would be nice. 
>>
>
>
> From what I can tell, this looks like the main use case for the "anchor 
> pattern" in stdlib.
>
> https://github.com/puppetlabs/puppetlabs-stdlib
>  
>

As I found out recently (Dan Bode clarified this on Oct 4th), anchors should 
only be needed for nested classes, unless I've misinterpreted that. The 
problem above is that you are defining a relationship on your 'wrapper' 
class, but the wrapper class has no relationship with the classes it 
itself declares (a feature of the 'include' syntax). I think this should work:

class wrapper { 
  include foo 
  include bar 
  include baz 
  Class[foo] -> Class[wrapper]
  Class[bar] -> Class[wrapper]
  Class[baz] -> Class[wrapper]
} 

node x { 
  include someclass 
  include wrapper 
  Class["someclass"]->Class["wrapper"] 
} 

Or in your wrapper class you could change all the include statements to 
require statements, but then you'd be mixing two ways of creating 
dependencies, and personally I think it's "purer" to stick with the 
relationship chaining operators.




Re: [Puppet Users] Scalability and performance

2012-10-10 Thread Luke Bigum


On Wednesday, October 10, 2012 7:44:48 AM UTC+1, Dan Bode wrote:
>
>
>
> On Tue, Oct 9, 2012 at 4:56 PM, Robjon wrote:
>
>> Hi guys,
>>
>> I am pretty new to this space, playing around with a few tools.
>> I am trying to read up on how I would scale Puppet (or other tools) up in 
>> my installation, and came across this blog post comparing Puppet and 
>> CFEngine: 
>> http://www.blogcompiler.com/2012/09/30/scalability-of-cfengine-and-puppet-2/
>>
>> The numbers presented here are pretty extreme: CFEngine agents running 
>> 166 times faster than Puppet agents in a small installation
>
>
> The results of that paper are not very realistic. The benchmark is based 
> on doing nothing but running echo commands.  Since cfengine is written in C 
> (or C++) there is not question that it will perform many actions faster 
> than Puppet, but saying that it is 100X faster or whatever is disingenuous 
> (unless you can manage your infrastructure with nothing but echo commands).
> I would be more interested to see comparisons based on real admin tasks 
> like managing packages or services.
>
>
As Dan said, the test case is rather biased. However, the raw numbers are 
believable: CFEngine is "faster". If performance is your be-all and 
end-all, or you are paying per CPU cycle, then CFEngine is hard to argue 
against, but I wouldn't discount Puppet just yet. How you scale Puppet 
depends a lot on how you use it. If you have very computationally expensive 
manifests (lots of resources) then your Master needs more power (or you 
need more Masters). If you describe your site in more detail, I think a lot 
of people here would be happy to describe their architecture or give 
recommendations.

Also, this blog post only talks about performance, with no other 
considerations like the tools' communities, existing modules/examples and 
the language itself. A trial worth doing yourself is implementing the same 
thing in both languages and evaluating how well each language scales.

 
>
>> - and the difference is increasing?
>> Also, it seems to be the case that Puppet is more centralized which 
>> results in everything slowing down: "as the master gets more loaded, all 
>> the Puppet agents run slower".
>>
>
> it is possible to either run puppet with or without a master. If you want 
> more centralized control, use a master, if you need something that scales 
> to the extreme, run puppet without a master using puppet apply (which is 
> much more similar to how cfengine works)
>  
>
>>
>> Is this correct? Could some of you with more experience please comment on 
>> this?
>>
>> Thanks.
>>




Re: [Puppet Users] deleting virtual users

2012-09-28 Thread Luke Bigum
This may not fit your requirements, but a slightly safer alternative 
might be to set your old users' shells to /bin/false and null out their 
passwords, rather than deleting them. A small added bonus: if your UIDs 
are never reused, then every UID still resolves to a user account, which 
can be helpful later down the track when you want to know who owned a 
file, rather than seeing an unresolved UID number.
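If it helps, a minimal sketch of that approach in Puppet (the user name is illustrative, and '!!' is the Red Hat-style locked-password convention; adjust the shell and password values to your platform):

# Disable rather than delete: the UID stays reserved, so files owned
# by the old account still resolve to a user name later on.
user { 'retired_user':
  ensure   => present,
  shell    => '/bin/false',
  password => '!!',  # locked password; convention varies by platform
}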


On 27/09/12 20:52, erkan yanar wrote:

Hoi  John,

On Thu, Sep 27, 2012 at 06:09:28AM -0700, jcbollinger wrote:


On Wednesday, September 26, 2012 2:15:27 PM UTC-5, erkules wrote:

On Wed, Sep 26, 2012 at 12:00:10PM -0700, Jo Rhett wrote:

Realizing doesn't allow overrides. To remove the user:

@user ahab { ensure => absent }
realize User['ahab']

This may mean you need to use inheritence for the class the user is

defined in, creating a child class for the nodes you want to remove him on.
Oha,
that sounds like putting a lot of thinking building the configuration
system.



Indeed, yes.  It is a complex task, and thinking is required.

Note, by the way, that virtualization really doesn't have much to do with
this particular issue.  There are basically two ways to remove users with
Puppet:

1. The fairly safe way: manage the user, setting ensure => absent.  That
'ensure' parameters could be set on the actual resource declaration,
perhaps conditionally, or it could be overridden later, such as via a
collection.

ok,


2. The simple, but rather dangerous, way: use the 'resources'
metaresource to declare that all unmanaged, non-system user accounts should
be purged from the system.  This has the potential to bite you -- hard --
but it has the advantage that all you have to do is stop realizing a user
or else not declare him at all to have him removed.

I confess I like that idea.


Note that even approach (1) I designate only "fairly" safe.  It is that
because you explicitly specify all user removals, but nothing can change
the fact that removing users is inherently risky.


thx a lot
erkan




--
Luke Bigum
Senior Systems Engineer

Information Systems
Ph: +44 (0) 20 3192 2520
luke.bi...@lmax.com | http://www.lmax.com
LMAX, Yellow Building, 1A Nicholas Road, London W11 4AN


FX and CFDs are leveraged products that can result in losses exceeding
your deposit.  They are not suitable for everyone so please ensure you
fully understand the risks involved.  The information in this email is not
directed at residents of the United States of America or any other
jurisdiction where trading in CFDs and/or FX is restricted or prohibited
by local laws or regulations.

The information in this email and any attachment is confidential and is
intended only for the named recipient(s). The email may not be disclosed
or used by any person other than the addressee, nor may it be copied in
any way. If you are not the intended recipient please notify the sender
immediately and delete any copies of this message. Any unauthorised
copying, disclosure or distribution of the material in this e-mail is
strictly forbidden.

LMAX operates a multilateral trading facility.  Authorised and regulated 
by the Financial Services Authority (firm registration number 509778) and
is registered in England and Wales (number 06505809). 
Our registered address is Yellow Building, 1A Nicholas Road, London, W11

4AN.




Re: [Puppet Users] Watch PuppetConf remotely

2012-09-27 Thread Luke Bigum

I missed Luke's keynote :-)

Will the recordings be available online sometime later?

On 27/09/12 17:16, Michelle Carroll wrote:

Hello,

PuppetConf is happening now, and we wanted to make sure everyone knew 
about the live streaming video. Even if you couldn't make it to San 
Francisco, you can watch talks in two of the large rooms. The schedule 
for streaming is posted here:


http://puppetlabs.com/blog/watch-the-puppetconf-live-video-stream/

and Luke is halfway through his keynote.

Thanks,
Michelle

--
Michelle Carroll
miche...@puppetlabs.com <mailto:miche...@puppetlabs.com>

Join us for PuppetConf 2012 in San Francisco: http://bit.ly/pcsig12

--
You received this message because you are subscribed to the Google 
Groups "Puppet Users" group.

To post to this group, send email to puppet-users@googlegroups.com.
To unsubscribe from this group, send email to 
puppet-users+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/puppet-users?hl=en.










Re: [Puppet Users] Iterate over array to mount NFS directories

2012-09-27 Thread Luke Bigum

Hi Forrie,

Good to see you are almost there! As you've discovered, the "looping" in 
Puppet isn't *really* looping; it's just a shorthand way of creating 
multiple resources. However, by combining that with a defined type, you 
can effectively reference the same array element multiple times using 
$name inside the define - as we do with the File and Mount resources - and 
you get an approximation of a loop.


Also remember there was a bug in my original post, the File resource 
needs "default => directory" to create a directory, not "present", as 
that would create a file instead, but it looks like you've figured that 
out anyway.


Unfortunately there is no way to recursively create directories in 
Puppet. You will need to manage the File[/dce/prod/] directory outside 
of the nfs_mount resources. Why? If you have more than one Prod 
nfs_mount, you will get duplicate definitions when you try to create 
/dce/prod/ inside the nfs_mount define.


In your example below you have a class called prod-nfs-mounts. Inside 
this class you could have a file { "/dce/prod": ensure => directory } 
resource to ensure the parent directory is created.


One more bug: the Mount resource should have a dependency on the File 
resource - you can't mount before the mount point exists. Add a 
'require => File[$mount_point]' parameter to the mount resource in the 
define.


Also just so you know, older versions of Puppet don't like dashes (-) in 
class and variable names, so I would recommend you use underscores where 
possible.
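Putting those fixes together, a sketch of the define might look like this (untested, with the ensure selector corrected to directory, the parent directory managed once outside the define, and the interpolation corrected to ${server} - note the quoted code below has a {$server} typo in its device parameter):

define nfs_mount ($server, $prefix, $state = 'mounted') {
  $mount_point = "${prefix}/${name}"

  file { $mount_point:
    ensure => $state ? {
      'unmounted' => absent,
      'absent'    => absent,
      default     => directory,  # not 'present', which makes a file
    },
  }

  mount { $mount_point:
    ensure  => $state,
    atboot  => true,
    options => 'tcp,hard,intr,rw,bg',
    device  => "${server}:${mount_point}",
    require => File[$mount_point],  # mount point must exist first
  }
}

# Managed once, outside the define, to avoid duplicate declarations
# when several nfs_mount instances share the same parent.
file { '/dce/prod': ensure => directory }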


-Luke

On 26/09/12 21:59, Forrest Aldrich wrote:
I did some tinkering around and came up with something that partially 
works.  The one problem I ran into was this:


Sep 26 16:53:55 test-fms puppet-agent[11974]: 
(/Stage[main]/Prod-nfs-mounts/Prod-nfs-mounts::Nfs_mount[201202]/File[/dce/prod/201202]/ensure) 
change from absent to present failed: Could not set 'present on 
ensure: No such file or directory - /dce/prod/201202 at 
/etc/puppet/manifests/classes/prod-nfs-mounts.pp:19


I changed the default for the "ensure" value to be "directory" and 
that didn't help, but this is strange:


Sep 26 16:56:50 test-fms puppet-agent[12776]: 
(/Stage[main]/Prod-nfs-mounts/Prod-nfs-mounts::Nfs_mount[201202]/File[/dce/prod/201202]/ensure) 
change from absent to directory failed: Cannot create 
/dce/prod/201202; parent directory /dce/prod does not exist
Sep 26 16:56:51 test-fms puppet-agent[12776]: Finished catalog run in 
6.08 seconds


I can't recall, but there is a way to ensure the directory mount point 
is recursively present before a mount is attempted.


It should not be trying to create a local directory, but mounting from 
the remote host. I think I'm almost there, but here's the test 
class I have thus far:




$production = [ "201201", "201202", "201203" ]
$server = "de-prod-nas.ourdom.com"
$prefix = "/dce/prod"
$nfsopts = "tcp,hard,intr,rw,bg"

class prod-nfs-mounts {

    define nfs_mount ( $server, $prefix, $state = "mounted" ) {

        $mount_point = "${prefix}/${name}"

        # If the state is "unmounted" the mount point 'File' needs
        # to be removed somehow, after

        file { $mount_point:
            ensure => $state ? {
                "unmounted" => absent,
                "absent"    => absent,
                default     => present,
            }
        }

        mount { $mount_point:
            ensure  => $state,
            atboot  => true,
            options => $nfsopts,
            device  => "${server}:${mount_point}",
        }
    }

    nfs_mount { $production: server => $server, prefix => $prefix }
}





Thanks again ! :-)






--
Luke Bigum
Senior Systems Engineer

Information Systems
Ph: +44 (0) 20 3192 2520
luke.bi...@lmax.com | http://www.lmax.com
LMAX, Yellow Building, 1A Nicholas Road, London W11 4AN



FX and CFDs are leveraged products that can result in losses exceeding
your deposit.  They are not suitable for everyone so please ensure you
fully understand the risks involved.  The information in this email is not
directed at residents of the United States of America or any other
jurisdiction where trading in CFDs and/or FX is restricted or prohibited
by local laws or regulations.

The information in this email and any attachment is confidential and is
intended only for the named recipient(s). The email may not be disclosed
or used by any person other than the addressee, nor may it be copied in
any way. If you are not the intended recipient please notify the sender
immediately and delete any copies of this message. Any unauthorised
copying, disclosure or distribution of the material in this e-mail is
strictly forbidden.

Re: [Puppet Users] Iterate over array to mount NFS directories

2012-09-26 Thread Luke Bigum

Hi Forrie,

My example below uses a defined type 
(http://docs.puppetlabs.com/puppet/2.7/reference/lang_defined_types.html) to 
group two related resources together, in this case the File resource for 
the mount point and the Mount resource for the NFS mount itself.


To answer your first question, no, "ensure => absent" in a Mount resource 
will not remove the mount point directory. That is why I've wrapped the 
File and Mount resources in a define, so you can use one definition of 
nfs_mount to control both the mount point and the NFS mount together, as 
in your scenario they are two closely related resources.


You will notice the $state parameter of my defined type is used in two 
places: for the 'ensure' parameter of the Mount (described in the docs 
for the Mount type) and the 'ensure' parameter of the File. Since the 
ensure parameter of the Mount type takes different arguments from the 
File type, I use a selector to transform the Mount's state into a state 
I can use in a File resource. The selector (ensure => $state ? { ... }) 
basically says this:


"If $state is unmounted, the File is absent"
"If $state is absent, the File is absent"
"If $state is anything else, the File is directory"

file { $mount_point:
    ensure => $state ? {
        "unmounted" => absent,
        "absent"    => absent,
        default     => directory,
    }
}

I just noticed a bug in my original post, it should be "default => 
directory" to create a directory, not a file :-)


http://docs.puppetlabs.com/puppet/2.7/reference/lang_conditional.html#selectors

As for the second question about the iteration, the iteration works the 
same way for a defined type as it does for any core Puppet type (File, 
Mount, Service, etc). It's not really a procedural "loop", just a 
shorthand way of writing out a set of resource declarations with 
exactly the same parameters, but the effect is the same.


So doing these:

file { $proddirs: ... }
mount { $proddirs: ... }

is the same as this:

nfs_mount { $proddirs: ... }

Now the $name parameter is the namevar of the defined type 
(http://docs.puppetlabs.com/puppet/2.7/reference/lang_resources.html#namenamevar). 
It's the unique name of the resource and works the same as for other 
Puppet types; in the examples below it is the strings "apache" and 
"/mnt/server/woof":


package { "apache": }
nfs_mount { "/mnt/server/woof": }

You can use $name inside a defined type just like any other variable / 
parameter.


So in an example use of my defined type, this definition:

nfs_mount { "201201":
  state  => "mounted",
  server => "nfs-server.domain.com",
  prefix => "/our/prefix",
}

Will result in the following standard Puppet resources:

file { "/our/prefix/201201":
  ensure => "directory",
}
mount { "/our/prefix/201201":
  ensure => "mounted",
  device => "nfs-server.domain.com:/our/prefix/201201",
}

Hopefully that explains the use of the defined type in more detail. If 
you have any more questions, please ask :-)


-Luke

On 25/09/12 23:09, Forrie wrote:

Thank you for your reply :)

The head of the code would need something like this:

$server = "nfs-server.domain.com"
$prefix = "/our/prefix"

# Arrays to iterate over, which would be a little longer than this
$proddirs = [ "201201", "201202", "201203" ]
$testdirs = [ "201201", "201202", "201203" ]
$devdirs  = [ "201201", "201202", "201203" ]

$nfsopts  = "tcp,hard,intr,rw,bg"

By "iterate" I meant to work through a specific array, such as above.

Reading through the Mount part of the docs, I don't believe that 
"absent" will remove the actual directory point, it says:


"Set it to 'absent' to unmount (if necessary) and remove the 
filesystem from the fstab"


So I would handle that by running another iteration over an array for 
each section that would have a routine to make sure it's "absent" and 
then also rmdir the entry in the filesystem.


I'm not understanding where, or over what, the below is iterating... as 
$name would need to be defined somehow.



Thanks!


On Tuesday, September 25, 2012 5:09:15 AM UTC-4, Luke Bigum wrote:

Hi Forrie,

With regards to your iteration question, you would need to use a
defined
type, something like this (untested):

define nfs_mount ( $server, $prefix, $state = "mounted" ) {
 $mount_point = "${prefix}/${name}"

 #If the state is "unmounted" the mount point 'File' is removed
 file { $mount_point:
 ensure => $state ? {

Re: [Puppet Users] problem with class include order

2012-09-25 Thread Luke Bigum

Hi Nikita,

On 25/09/12 10:05, Nikita Burtsev wrote:

Hello,

We have a weird problem with includes:
err: Could not retrieve catalog from remote server: Error 400 on 
SERVER: Could not find resource 'Class[Common_software]' for 
relationship from 'Class[Default_repositories]'


My wild guess would be that "common_software" gets in before 
"default_repositories" and thus the error message. If i run agent 
again it sometimes goes away and configuration gets applied, sometimes 
it does not. I even tried using stages, did not help.




No, this is not an ordering problem, because ordering (generally) 
doesn't come into it when talking about classes and relationships. It 
doesn't matter in which order your classes are declared. It might be 
more helpful if I reword the error to:


"The class Default_repositories has a dependency on the class 
Common_software, but I can't find the class Common_software declared 
anywhere"


You say below you use inheritance. If your site classes inherit from 
common_software, then common_software is implicitly declared in your 
manifests, so you shouldn't see this problem. Are you *sure* all your 
child classes inherit properly?


When you say "it sometimes gets applied", is this for the same node or 
different nodes? I would expect this problem to always be there, not be 
intermittent. If it's intermittent, do you have other conditionals 
around the inclusion of your site software classes?
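As a quick sanity check, you could declare the parent class explicitly 
in nodes.pp - a sketch based on your snippet below:

```puppet
node 'basenode' {

    class { 'default_repositories': stage => pre }

    # Declaring the parent explicitly guarantees Class[Common_software]
    # exists for the relationship, regardless of how inheritance behaves
    include common_software
    include common_software-site
}
```

If that makes the error disappear reliably, the problem is in how the 
child class inherits.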



nodes.pp looks like this;

stage { pre: before => Stage[main] }

node 'basenode' {

class { 'default_repositories' : stage => pre }

include common_software-site
   
}

We have multiple (read: many) sites with configuration which varies 
here and there. To make things a bit more sane and to reduce 
duplication we decided to have common code base we ship to each site 
which then changes from site to site using inheritance mechanism, so, 
for example, there is class called "common_software" which defines 
some resources and then there is "common_software-site" which inherits 
base class and adds some functionality.


If I include child class in nodes.pp problem is there, but including 
parent class fixes the problem.


Any thoughts on the matter?

BR,
Nikita
--
You received this message because you are subscribed to the Google 
Groups "Puppet Users" group.
To view this discussion on the web visit 
https://groups.google.com/d/msg/puppet-users/-/X2-ki-AVTZ0J.

To post to this group, send email to puppet-users@googlegroups.com.
To unsubscribe from this group, send email to 
puppet-users+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/puppet-users?hl=en.






Re: [Puppet Users] Iterate over array to mount NFS directories

2012-09-25 Thread Luke Bigum

Hi Forrie,

With regards to your iteration question, you would need to use a defined 
type, something like this (untested):


define nfs_mount ( $server, $prefix, $state = "mounted" ) {
    $mount_point = "${prefix}/${name}"

    # If the state is "unmounted" the mount point 'File' is removed
    file { $mount_point:
        ensure => $state ? {
            "unmounted" => absent,
            "absent"    => absent,
            default     => present,
        }
    }

    mount { $mount_point:
        ensure => $state,
        device => "${server}:${mount_point}",
    }
}

nfs_mount { $production: server => $server, prefix => $prefix }

See the documentation for the Mount type in Puppet and its ensure 
parameter for possible values for $state in the define above - it's 
possible to have entries in /etc/fstab but not actually mounted, which 
should satisfy your two-stage cleanup, or you can just set $state to 
'absent' straight away and clean up both the NFS mount and the mount 
point. This means you need to maintain two arrays: one of active mount 
points and one of decommissioned mounts; however, you probably don't 
need to keep the decommissioned mounts around forever - once every 
server has cleaned itself up they can be removed from the manifest.


http://docs.puppetlabs.com/references/latest/type.html#mount
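A sketch of the two-array approach (untested; the array contents and 
variable names are just illustrative):

```puppet
# Mounts that should be present and mounted
$active_mounts = [ "201203", "201204" ]

# Mounts being decommissioned; delete entries from this array once
# every server has cleaned itself up
$old_mounts = [ "201201", "201202" ]

nfs_mount { $active_mounts:
    server => $server,
    prefix => $prefix,
}

nfs_mount { $old_mounts:
    server => $server,
    prefix => $prefix,
    state  => "absent",
}
```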

Hope that helps,

-Luke

On 24/09/12 23:43, Forrie wrote:
I have many systems that require NFS mounts for production.  Rather 
than have one entry of file{} and mount{} per NFS import, in a *.pp 
file, I'd rather set up and iterate over an array.   Looking at the 
docs, I'm not quite sure how to do this properly.  We have three 
groups for which I would need this (production, development, test) 
that each have their own NFS mounts.


here's what I would use:

$server = "server.name.com"
$prefix = "/some/nfs/root"

# array
$production = [
  "dir1",
  "dir2",
  "dir3",
  "dir4",
] # etc etc

Then issue a command to iterate and manage those NFS mounts.

Since these change from time-to-time, and require some pruning... I 
will be left with "unmanaged" resources (ie: directory mount points) 
scattered around that I will need to clean up.  I read through some 
tickets for feature requests and got lost in where this is going -- 
however, to keep the place neat and clean, I'd like to unmanage the 
mount points and the fstab entries after.   The idea of manually doing 
this from system to system isn't good.


I'm still new-ish to puppet, so any pointers would be appreciated.


Thanks.








Re: [Puppet Users] Puppet 2 vs Puppet 3

2012-09-19 Thread Luke Bigum
Puppet 2.7.19 is the latest stable release; Puppet 3.0 is still in RC. 
To give your proof of concept a fair trial, I'd say go with Puppet 2.7. 
All of the documentation on PuppetLabs' site is for 2.7 too.


http://yum.puppetlabs.com/el/6Server/products/x86_64/

On 19/09/12 16:29, Mark wrote:
Hi, I'm new to Puppet and, tbh, still evaluating Puppet and Chef. The 
time has come to install both in a proof-of-concept environment.


I'm wondering if I should install Puppet 2 (2.6.16 is available in my 
Yum repo) or whether I should go with Puppet 3.


Is there a document that lists the differences between the two major 
versions?


Thanks,
Mark








Re: [Puppet Users] Staging environment

2012-09-19 Thread Luke Bigum

On 19/09/12 06:11, Gonzalo Servat wrote:

Hi All,

In our environment, we use the $::environment variable extensively to 
determine if the host should have one set of mounts (e.g. production) 
or a different set of mounts (e.g. qa). This is just one example, but 
there are many others where the $::environment variable comes into play.


The problem is that I have a number of puppet changes that I want to 
test before merging into the production tree, so I've created a 
staging environment; however, given the importance of the 
$::environment variable throughout the manifests, this won't work.

I think the only way to get around this problem is to copy 
$::environment to $::my_environment or some such, and then change all 
the references to use $::my_environment. Then you could special-case 
'staging' so that it forcefully sets $my_environment to either 'prod' 
or 'qa' in your case.
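For example, a selector at top scope along these lines (untested; that 
staging nodes should behave like 'prod' is my assumption):

```puppet
# Top scope, e.g. site.pp: staging nodes pretend to be prod,
# everything else keeps its real environment
$my_environment = $::environment ? {
    "staging" => "prod",
    default   => $::environment,
}
```

Then all the manifests reference $::my_environment instead of 
$::environment.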


Any suggestions? I'd like to point a number of production nodes at a 
secondary puppet server using --noop to see what would change, but 
then I run into SSL issues. Would be great if I could use puppet over 
cleartext http for this test, but I'm not sure if that's possible.




If you set up a second Puppet Master and synchronise the CA and all the 
signed certificates from your primary to your "slave" Master, it should 
work. I would forcefully turn certificate signing off on your "slave" in 
puppet.conf. Dan Bode wrote a great article ages ago about multi-master 
Puppet which you might want to reference: http://bodepd.com/wordpress/?p=7



Thanks in advance for any feedback.
Gonzalo






Re: [Puppet Users] Setting environment variables

2012-09-18 Thread Luke Bigum
If you are trying to set environment variables for users then 
/etc/profile.d/ is your best bet on Red Hat flavoured OSes, and 
/etc/sysconfig/ for services.


If you are trying to set an environment variable for the Puppet agent 
itself, it depends on how you run Puppet. As a daemon? 
/etc/sysconfig/puppet will work. From cron? Add the environment variable 
to the cron line.


If you are trying to modify the environment of the Ruby process that's 
running Puppet from within the same Puppet run then that may not be 
possible.
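For the /etc/profile.d/ case, a minimal sketch (the file name, variable 
and value are just examples):

```puppet
# Login shells source /etc/profile.d/*.sh, so this makes the
# variable available to interactive users
file { "/etc/profile.d/myapp.sh":
    ensure  => present,
    owner   => "root",
    group   => "root",
    mode    => "0644",
    content => "export MYAPP_HOME=/opt/myapp\n",
}
```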


On 18/09/12 14:23, Bai Shen wrote:
I need to set some environment variables on some of my systems.  How 
can I do this with puppet?  I tried googling but just got results 
about setting variables in order to get puppet running, not setting 
them on the clients.


Thanks.






Re: [Puppet Users] Re: Systems Provisioning

2012-09-18 Thread Luke Bigum
If you want the least amount of headache at the cost of security, here 
is a sanitised extract from my kickstarts:


#LB: attempt to revoke and delete the certificate for this hostname,
#this should stop us having to manually clean off every cert.
curl -k -X PUT -H "Content-Type: text/pson" \
    --data '{"desired_state":"revoked"}' \
    https://puppet:8140/production/certificate_status/$HOSTNAME
curl -k -X DELETE -H "Accept: pson" \
    https://puppet:8140/production/certificate_status/$HOSTNAME
#LB: run Puppet, our hostname should be set correctly by now
puppet agent --test --pluginsync --report --environment testing


You will need this in auth.conf on your master:

#allow hosts to delete their own certificates
#path /certificate_status/([^/]+)$
path ~ /certificate_status/([^/]+)$
auth any
allow $1

Hope that helps,

-Luke

On 17/09/12 19:16, Douglas Garstang wrote:

I probably should have been clearer with my question. I was more
interested in how people are managing certificates? Even if you use
autosign, you still need to clean certificates manually.

Doug.

On Mon, Sep 17, 2012 at 6:25 AM, Keiran Sweet  wrote:

Hi There,
I manage a relatively large RHEL environment, we handle provisioning as
follows:

- PXE + Kickstart to bootstrap and install the base OS + Puppet client onto
the platform, be it VMWare or bare metal
- Kickstart post scripts put a basic puppet configuration file in place on
the host, and a number of the values for things such as environment and
puppetmaster come from Foreman's macros; this allows values in the ENC to
flow into the kickstart files before your first puppet run.

We then run in the %post section of the kickstart file the following:
- A Puppet run that bootstraps the puppet client using tags ie,  --tags
puppet::client
- A full puppet run via puppet agent -tov which applies the SOE to the
platform

That provides on first boot a fully configured RHEL server that includes all
our additional software and customisations in about 3-5 minutes (not
including POST)

In regards to certs, we have a relatively open autosign.conf on our build
networks, so we can provision servers , physical or virtual quite quickly by
just hitting F12 for a network boot. I am sure there are some cleaner/more
secure things we can do provisioning-wise; however, these have been slightly
hindered by the RHN Satellite server I've been slowly pulling out of the
environment at the same time, as it had the potential to break things if I
wasn't careful.

ENC wise, I can't recommend Foreman enough, version 1.x is just brilliant,
you can see the macros it can provide here:
http://theforeman.org/projects/foreman/wiki/TemplateWriting

Hope this helps,

K









On Sunday, September 16, 2012 7:22:03 AM UTC+1, Douglas wrote:

I'm wondering what people are doing systems provisioning with, ie the
process that gets puppet installed onto a system, running for the
first time, and also the handling of certificate signing and so forth.
I don't see this topic discussed much.

The mc-provision tools at
https://github.com/ripienaar/mcollective-server-provisioner don't seem
to be actively developed anymore, or at least I wasn't able to find
enough documentation to be able to effectively make use of it.

Doug








Re: [Puppet Users] Need more information

2012-09-18 Thread Luke Bigum

Consider the following test.pp:


biguml@biguml-laptop:~$ cat test.pp
$duck = "quack!"

class dog {
    $sound = "woof"
    $duck = "woof"
    notify { "local dog sound": message => $sound }
    notify { "local duck sound": message => $duck }
    notify { "top scope duck sound": message => $::duck }
}

class cat {
    $sound = "meow"
    notify { "local cat sound": message => $sound }
    notify { "dog sound from cat class": message => $dog::sound }
}

include dog
include cat



And its results:

biguml@biguml-laptop:~$ puppet apply test.pp | grep defined
notice: /Stage[main]/Cat/Notify[local cat sound]/message: defined 
'message' as 'meow'
notice: /Stage[main]/Dog/Notify[local dog sound]/message: defined 
'message' as 'woof'
notice: /Stage[main]/Dog/Notify[top scope duck sound]/message: defined 
'message' as 'quack!'
notice: /Stage[main]/Dog/Notify[local duck sound]/message: defined 
'message' as 'woof'
notice: /Stage[main]/Cat/Notify[dog sound from cat class]/message: 
defined 'message' as 'woof'




You can see how I can reference variables in another class (dog sound in 
the cat class) through their "scope" (the name of the class) even though 
there is a local variable named the same thing in the cat class.


You can also see that a local variable takes precedence over the 
'global' top scope one ($duck in this case). Top scope variables are 
anything not defined inside a class, so things from your site.pp; some 
internal Puppet variables are also top scope.


Accessing variables in nested classes is simple as well: 
$module::subclass1::subclass2::subclass3::variablename


Does that help?

-Luke

On 17/09/12 19:27, Balasubramaniam Natarajan wrote:
On this particular link 
http://docs.puppetlabs.com/learning/variables.html#variables I am a 
bit confused about the following two statements. Could someone explain 
to me what they are about, with a simple example?


Every variable has a short local name and a long fully-qualified name. 
Fully qualified variables look like $scope::variable. Top scope 
variables are the same, but their scope is nameless. (For example: 
$::top_scope_variable.)


If you reference a variable with its short name and it isn't present 
in the local scope, Puppet will also check the top scope; this 
means you can almost always refer to global variables with just their 
short names.


--
Regards,
Balasubramaniam Natarajan
www.etutorshop.com/moodle/

--
You received this message because you are subscribed to the Google 
Groups "Puppet Users" group.

To post to this group, send email to puppet-users@googlegroups.com.
To unsubscribe from this group, send email to 
puppet-users+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/puppet-users?hl=en.



--
Luke Bigum
Senior Systems Engineer

Information Systems
Ph: +44 (0) 20 3192 2520
luke.bi...@lmax.com | http://www.lmax.com
LMAX, Yellow Building, 1A Nicholas Road, London W11 4AN



FX and CFDs are leveraged products that can result in losses exceeding
your deposit.  They are not suitable for everyone so please ensure you
fully understand the risks involved.  The information in this email is not
directed at residents of the United States of America or any other
jurisdiction where trading in CFDs and/or FX is restricted or prohibited
by local laws or regulations.

The information in this email and any attachment is confidential and is
intended only for the named recipient(s). The email may not be disclosed
or used by any person other than the addressee, nor may it be copied in
any way. If you are not the intended recipient please notify the sender
immediately and delete any copies of this message. Any unauthorised
copying, disclosure or distribution of the material in this e-mail is
strictly forbidden.

LMAX operates a multilateral trading facility.  Authorised and regulated 
by the Financial Services Authority (firm registration number 509778) and
is registered in England and Wales (number 06505809). 
Our registered address is Yellow Building, 1A Nicholas Road, London, W11 4AN.




Re: [Puppet Users] Managing classes of machines

2012-09-14 Thread Luke Bigum
A very popular design is the "module per package/service" approach. Your 
modules become building blocks and your node definitions pull all the 
building blocks in to describe a certain machine. So if you've got web 
and database server types, your modules might be "apache" and "mysql".


If you are just starting out you might want a "dev_apache" and 
"prod_apache" module, but once you get more confident in your Puppet 
skills you will want to start coding your modules with a bit of 
flexibility, so you can use the same apache module for your development 
and production servers. After all, Puppet is about not repeating yourself.
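As a sketch of the building-block idea (node and module names here are invented for illustration):

```puppet
# site.pp - node definitions pull in the building blocks
node 'web01.example.com' {
  include apache
}

node 'db01.example.com' {
  include mysql
}

node 'dev01.example.com' {
  # A dev box might get both, tuned differently via class parameters
  include apache
  include mysql
}
```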


The Pro Puppet book is a very good place to start: it starts simple and 
then goes into the apache::install, apache::config and apache::service 
sub-class design, which I'm a big fan of.
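Roughly, that sub-class pattern looks like this (a sketch only; the package, file and service names will vary by distribution):

```puppet
# modules/apache/manifests/ - the install/config/service split
class apache {
  include apache::install
  include apache::config
  include apache::service
}

class apache::install {
  package { 'httpd': ensure => installed }
}

class apache::config {
  file { '/etc/httpd/conf/httpd.conf':
    source  => 'puppet:///modules/apache/httpd.conf',
    require => Class['apache::install'],
    notify  => Class['apache::service'],
  }
}

class apache::service {
  service { 'httpd':
    ensure => running,
    enable => true,
  }
}
```

The require/notify chain means the package is installed before the config file is managed, and the service restarts when the config changes.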


Hope that helps,

-Luke

On 14/09/12 07:23, Gregory Orange wrote:

Hi everyone,
We've got a fairly small set of machines (perhaps 30) soon to be 
managed with puppet. We're looking for a good way to define which 
machines get which packages, and how those packages are configured on 
certain sets of machines.


e.g. Apache on devel and production-webserver machines, but not on 
production-db machines. Apache should be configured differently on 
devel machines compared to production-webserver machines.


I've read a couple of conflicting opinions on this from mailing list 
archives, (I think) pages on the puppetlabs website, and the Pro Puppet 
book, so I'm ignoring it all for the moment and asking for opinions here.


TIA,
Greg.




--
Luke Bigum
Senior Systems Engineer

Information Systems
Ph: +44 (0) 20 3192 2520
luke.bi...@lmax.com | http://www.lmax.com
LMAX, Yellow Building, 1A Nicholas Road, London W11 4AN





