Re: [Puppet Users] Re: many agents connecting at same time and 100+ nodes failed.

2014-11-06 Thread james . eckersall
I used to have issues with the agent leaking memory over time.  This goes 
back to the 2.6 days.
I implemented a cron job back then to restart the agent every night and 
never removed the job (even though I'm now running 3.6), so I don't know 
whether the agent daemon still has memory issues.

If you were aiming for a smaller resource footprint on the server, the cron 
route would likely be better, as it's one less daemon running 24/7 on each 
node.
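For illustration only, a nightly restart job of that sort could be managed 
with a cron resource along these lines (the service command, schedule and 
resource name are assumptions, not what I actually run):

cron { 'restart-puppet-agent':
  ensure  => present,
  command => '/sbin/service puppet restart',
  user    => 'root',
  hour    => 3,
  minute  => 15,
}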



[Puppet Users] Re: many agents connecting at same time and 100+ nodes failed.

2014-11-06 Thread james . eckersall
Try using the splay config option on the agents.  It should help to 
distribute the agent runs.
https://docs.puppetlabs.com/references/latest/configuration.html#splay

If that fails, you could try running the puppet agent from a cron job 
instead, with randomised start times as per the link below.

http://mycfg.net/articles/random-start-times-for-cron-jobs-with-puppet.html
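The splay option itself is just splay = true in the [agent] section of 
puppet.conf.  For the cron route, a rough sketch (the paths and resource 
name are assumptions) using fqdn_rand to give each node its own start 
minute:

$agent_minute = fqdn_rand(60)

cron { 'puppet-agent':
  ensure  => present,
  command => '/usr/bin/puppet agent --onetime --no-daemonize --logdest syslog',
  user    => 'root',
  minute  => $agent_minute,
}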



[Puppet Users] Re: pull defined type into puppet template?

2014-10-24 Thread james . eckersall
 Hi Trey,

You could always use the concat module.
https://forge.puppetlabs.com/puppetlabs/concat
Create the global fragment in the class and then use a defined type to 
create a concat fragment for each entry that you want.  You can pass a 
template to each fragment.
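A rough sketch of that pattern (the module name, file path and template 
names here are made up for illustration):

class myapp::config {
  concat { '/etc/myapp.conf':
    owner => 'root',
    group => 'root',
    mode  => '0644',
  }

  # the global/header fragment
  concat::fragment { 'myapp_header':
    target  => '/etc/myapp.conf',
    content => template('myapp/header.erb'),
    order   => '01',
  }
}

# one fragment per entry; the template can reference $name and $setting
define myapp::entry ($setting) {
  concat::fragment { "myapp_entry_${name}":
    target  => '/etc/myapp.conf',
    content => template('myapp/entry.erb'),
    order   => '10',
  }
}

Each entry is then just a declaration like myapp::entry { 'foo': setting => 'bar' }.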



Re: [Puppet Users] Re: puppetlabs-corosync: cs_property doesn't get set

2014-10-15 Thread james . eckersall
Hi,

In the cs_primitive provider, I added debug code to write the updated 
variable to a file in /tmp.  I then tried applying that with crm update, as 
the provider does.  crm returned errors to me that the provider wasn't 
reporting.
I was able to use that to determine that I didn't have all the params set 
correctly.  For some reason, the provider wasn't reporting invalid params 
to me.
I've since learned about Ruby's pry, so that would probably be a better 
approach than writing debug info to a file.

Regards
J



Re: [Puppet Users] Re: puppetlabs-corosync: cs_property doesn't get set

2014-10-13 Thread James Eckersall
Hi Georg,

I had issues when I was first using this module with the primitives.

I ended up adding some debug code to the provider to try and work out what
was going on.  I'm sure there are better ways of doing it, but I used
File.write to shove log messages into a file on the client in strategic
places.
Try adding something in the instances, create and flush methods to see what
it's doing.  instances should have any known properties from crm in the
instances array.  In the flush method, see if property_hash has any values.
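For example (the log path and message are arbitrary), something like this 
dropped into flush will show you what the provider thinks it knows:

# hypothetical debug code for the provider's flush method
def flush
  File.open('/tmp/cs_property_debug.log', 'a') do |f|
    f.puts "flush: property_hash = #{@property_hash.inspect}"
  end
  # ... rest of the original flush method ...
end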

I saw in one of the videos from puppetconf that you can also use pry (
http://pryrepl.org/).

require 'pry'
binding.pry

That should give you an interactive Ruby shell at whatever point you drop
the above code.  Try sticking that in the instances, create and flush
methods.  That way, you can check variable states, etc.

It might sound obvious, but make sure you have the latest version of the
corosync module.  I think it was completely rewritten fairly recently.

J

On 13 October 2014 14:32, ge...@riseup.net  wrote:

> Hi,
>
> On 14-10-13 05:58:06, james.eckers...@fasthosts.com wrote:
> > Since you aren't getting any errors, it would suggest to me that puppet
> > thinks those values are already set correctly and therefore require no
> > further action.
> > The module essentially parses the output from "crm configure show xml",
> so
> > I'd check what is being returned in that output for those properties.
>
> Sorry, forgot this. Actually I've checked this last week after using the
> module for the first time and getting the errors. The output doesn't
> display anything regarding those properties. It seems, that those aren't
> being set, as I wrote in my first mail, hence crm_verify -LV showing
> errors.
>
> (I've wrapped the output to make it better readable.)
>
> <cib ... crm_feature_set="3.0.6"
>  dc-uuid="gw8.prod.example.com" epoch="9" have-quorum="1" num_updates="4"
>  update-client="crmd" update-origin="gw8.prod.example.com"
>  validate-with="pacemaker-1.2">
>   <configuration>
>     <crm_config>
>       <cluster_property_set id="cib-bootstrap-options">
>         <nvpair ... value="1.1.7-ee0730e13d124c3d58f00016c3376a1de5323cff"/>
>         <nvpair ... name="cluster-infrastructure" value="openais"/>
>         <nvpair ... name="expected-quorum-votes" value="1"/>
>       </cluster_property_set>
>     </crm_config>
>     <nodes>
>       <node ... uname="gw8.prod.example.com"/>
>     </nodes>
>     <resources/>
>     <constraints/>
>   </configuration>
> </cib>
>
> > You might have to start digging into the provider code to see exactly
> what
> > it's doing.  The code doing the work for properties is in the file
> > lib/puppet/provider/cs_property/crm.rb.  Time to grab a Ruby hat :)
>
> I've had a look at this as well last week.
>
> Using this
>   crm('configure', 'property', '$id="cib-bootstrap-options"',
>       "#{@property_hash[:name]}=#{@property_hash[:value]}")
> from the CLI, while substituting name and value with the relevant
> content, does work, and the properties are being set.
>
> > As somewhat of an endorsement for the module, I'm using it on a 2 node
> > cluster, running Ubuntu 14 and it works fine for properties and
> primitives,
> > etc.
>
> Might there be a difference between Ubuntu and Debian? (Actually I don't
> think so, even the doc mentions that the module was tested on Debian
> Squeeze; just being quite clueless...)
>
> Thanks,
> Georg
>



Re: [Puppet Users] Re: puppetlabs-corosync: cs_property doesn't get set

2014-10-13 Thread james . eckersall
Hi,

Since you aren't getting any errors, it would suggest to me that puppet 
thinks those values are already set correctly and therefore require no 
further action.
The module essentially parses the output from "crm configure show xml", so 
I'd check what is being returned in that output for those properties.
You might have to start digging into the provider code to see exactly what 
it's doing.  The code doing the work for properties is in the file 
lib/puppet/provider/cs_property/crm.rb.  Time to grab a Ruby hat :)

As somewhat of an endorsement for the module, I'm using it on a 2-node 
cluster running Ubuntu 14, and it works fine for properties, primitives, 
etc.
The one thing that didn't work for me was resource-stickiness.  I had to 
set that manually, as it's part of rsc_defaults (not supported by the 
module), not properties.
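Setting it by hand with the crm shell would look something like this (the 
stickiness value is only an example):

crm configure rsc_defaults resource-stickiness=100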


On Monday, 13 October 2014 13:09:01 UTC+1, ge...@riseup.net wrote:
>
> Hi James, 
>
> On 14-10-13 01:27:05, james.e...@fasthosts.com  wrote: 
> > Try running the agent with --debug and --evaltrace to identify what 
> Puppet 
> > is doing in relation to those resources. 
>
> Thanks for your help. The log shows multiple lines like: 
> Mon Oct 13 13:40:11 +0200 2014 Cs_property[stonith-enabled] (info): 
> Starting to evaluate the resource 
> Mon Oct 13 13:40:11 +0200 2014 Cs_property[stonith-enabled] (info): 
> Evaluated in 0.00 seconds 
>
> Because of the size I've put the full log at [1]. 
>
> Still I'm missing an idea how to debug this further. 
>
> Regards, 
> Georg 
>
>
> [1] http://pastebin.com/raw.php?i=nLs7vDad 
>



[Puppet Users] Re: puppetlabs-corosync: cs_property doesn't get set

2014-10-13 Thread james . eckersall
Try running the agent with --debug and --evaltrace to identify what Puppet 
is doing in relation to those resources.



[Puppet Users] Re: Does eyaml CLI have options to specify the private/public key paths?

2014-02-07 Thread james . eckersall
On Thursday, 6 February 2014 21:37:55 UTC, Larry Fast wrote:
> The default value for the private key path in the eyaml CLI is
> ./keys/private_key.pkcs7.pem.  Is there a CLI option to override the
> default?

yep :)

$ eyaml --help
Hiera-eyaml is a backend for Hiera which provides OpenSSL encryption/decryption for Hiera properties

Usage:
  eyaml [options]
  eyaml -i file.eyaml   # edit a file
  eyaml -e -s some-string   # encrypt a string
  eyaml -e -p   # encrypt a password
  eyaml -e -f file.txt  # encrypt a file
  cat file.txt | eyaml -e   # encrypt a file on a pipe

Options:
        --createkeys, -c:   Create public and private keys for use encrypting properties
           --decrypt, -d:   Decrypt something
           --encrypt, -e:   Encrypt something
              --edit, -i:   Decrypt, Edit, and Reencrypt
             --eyaml, -y:   Source input is an eyaml file
          --password, -p:   Source input is a password entered on the terminal
            --string, -s:   Source input is a string provided as an argument
              --file, -f:   Source input is a file
                 --stdin:   Source input is taken from stdin
    --encrypt-method, -n:   Override default encryption and decryption method (default is PKCS7) (default: pkcs7)
            --output, -o:   Output format of final result (examples, block, string) (default: examples)
             --label, -l:   Apply a label to the encrypted result
                 --debug:   Be more verbose
                 --quiet:   Be less verbose
* --pkcs7-public-key, -k:   Public key directory (default: ./keys/public_key.pkcs7.pem) *
* --pkcs7-private-key, -r:  Private key directory (default: ./keys/private_key.pkcs7.pem) *
           --version, -v:   Print version and exit
              --help, -h:   Show this message
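So, per the two options highlighted above, something like this should work 
(the paths are just an example):

eyaml -e -s 'some secret' \
  --pkcs7-public-key=/etc/puppet/keys/public_key.pkcs7.pem \
  --pkcs7-private-key=/etc/puppet/keys/private_key.pkcs7.pem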



Re: [Puppet Users] Re: Run stages of virtual resources

2014-01-06 Thread james . eckersall
On Monday, 6 January 2014 15:59:53 UTC, Joseph Swick wrote:
>
> On 01/02/2014 03:18 AM, james.e...@fasthosts.com  wrote: 
> > How about chaining the resources, ala 
> > 
> http://docs.puppetlabs.com/puppet/2.7/reference/lang_relationships.html#chaining-arrows
>  
> > . 
> > 
> > Yumrepo <| |> -> Package<| |> 
> > 
> > This declared in site.pp should apply globally to all nodes and would 
> avoid 
> > the use of run stages (if I understand it correctly). 
> > 
> > J 
> > 
>
> One caveat to this is that if you are defining any packages (or the 
> YumRepos) virtually and then adding them to modules with 'realize', the 
> resource collector will realize all of the virtual packages, regardless 
> of whether you're realizing them in a module or not.  This behavior is 
> documented in the resource chaining documentation. 
>
> I ran into this personally with a couple of custom modules that we use 
> virtual packages with so that we don't get duplicate resource errors 
> when managing various packages.  My case was very similar, I had a 
> custom Yum repo I wanted to ensure that was put in place before puppet 
> tried to install a package out of it, so I had defined the chaining 
> within the module for the yum repo of: 
>
> Yumrepo['CustomRepo'] -> Package <| name == 'CustomPackage' |> 
>
> However, when I moved the custom package into our virtual package 
> resources, that package started getting realized on machines that didn't 
> need it, but I had forgotten that I had done the above resource 
> chaining.  Fortunately, we did provide a way to require repos with our 
> virtual package definitions, so I was able to remove the resource 
> chaining and still have the desired result. 
>
> -- 
> Joseph Swick > 
> Operations Engineer 
> Meltwater Group 
>
>
Yeah I didn't spot this in the docs before I posted.
Thanks for correcting me :)

J
 



[Puppet Users] Re: Run stages of virtual resources

2014-01-02 Thread james . eckersall
How about chaining the resources, a la 
http://docs.puppetlabs.com/puppet/2.7/reference/lang_relationships.html#chaining-arrows
.

Yumrepo <| |> -> Package<| |>

This declared in site.pp should apply globally to all nodes and would avoid 
the use of run stages (if I understand it correctly).

J

On Tuesday, 31 December 2013 16:47:33 UTC, David Arroyo wrote:
>
> Our site has several dozen yum repositories. Pushing all yum repositories 
> to all servers isn't practical; it hurts performance, some repositories are 
> OS-specific, and some repositories cause conflicts with each other (we have 
> a ruby187 repo and a ruby 193 repo, for example). 
>
> In our current setup, we have one module with all our yumrepos defined 
> virtually: 
>
> class yumrepos { 
>   @yumrepo{'puppet': 
> … 
>   } 
>   @yumrepo{'python26': 
> … 
>   } 
>   … 
> } 
>
> And our various modules realize those resources as needed: 
>
> class puppet(...) { 
>   realize Yumrepo['puppet'] 
>   … 
> } 
>
> However, this requires every package definition to require the Yumrepo 
> resource. I can ease the pain with resource defaults, but it doesn't go 
> away completely. I have found on puppet 2.7 that virtual resources are 
> evaluated in the run stage they are defined in, not the run stage they are 
> realized in, so that I can do in site.pp: 
>
> stage{'package-setup': before => Stage['main'] } 
> class{'yumrepos': stage => 'package-setup' } 
>
> Then all yum repositories that a node will use are on the machine before 
> any packages are installed. Is this a kosher use of run stages? Am I going 
> to be surprised by something I didn't consider? I have only tested this 
> behavior in Puppet 2.7 and don't know if it is subject to change in later 
> releases. How do others handle this problem? 
>
> -David



RE: [Puppet Users] Re: Random Internal Server Error after upgrading from 2.7.19 to 3.3.1

2013-12-10 Thread James Eckersall
And thanks from me.

Felix, thank you for your investigation on this issue, it's really appreciated 
:)

From: puppet-users@googlegroups.com [mailto:puppet-users@googlegroups.com] On 
Behalf Of Louise Baker
Sent: 10 December 2013 23:36
To: puppet-users@googlegroups.com
Subject: Re: [Puppet Users] Re: Random Internal Server Error after upgrading 
from 2.7.19 to 3.3.1

Thank you both for your responses :)

On Wed, Dec 11, 2013 at 9:25 AM, Felix Frank 
<felix.fr...@alumni.tu-berlin.de> wrote:
On 11/29/2013 01:39 PM, james.eckers...@fasthosts.com wrote:
>
> If you aren't setting the vardir explicitly in all clients puppet.conf,
> I'd suggest doing so before upgrading.
> After my upgrade, I had to manually intervene on almost all client nodes
> because puppet was failing to run.  Bug filed
> at http://projects.puppetlabs.com/issues/23311
I had some pokes at it. Unfortunately, it got duplicated in issue 23349.
Fortunately, that newer one has already been solved, so 3.4 should be good.

Thanks,
Felix




[Puppet Users] Re: puppetdb missing environment fact

2013-12-04 Thread james . eckersall
Thanks Luke. 

I just found a bug report describing that this behaviour has changed in 3.x 
http://projects.puppetlabs.com/issues/17692

For me, being able to determine the agent environment is very useful.
We use git for the Puppet manifests and each branch is an environment.
So we'll create new branches to test and deploy new features.
Then when it's ready to go live to all nodes, we'll merge that branch back 
into master and remove the feature branch.
I rely on being able to query puppetdb or puppet-dashboard to find out 
which nodes are using environment X, so I can safely remove a branch once 
there are no nodes using it.
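For example, a query along these lines against the v3 API should list the 
nodes whose environment fact is "testing" (the environment name is just an 
example):

curl -G 'http://localhost:8080/v3/nodes' --data-urlencode 'query=["=", ["fact", "environment"], "testing"]'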

We also manage Puppet with Puppet and set the environment in puppet.conf 
based on the current environment.  This does still work as the environment 
is available in the manifests.

I also found the following code (at 
https://groups.google.com/forum/#!topic/puppet-users/AM1o4Khloto) for 
turning environment into a fact.
Including it here in case it's useful to others.

require 'puppet'

Facter.add('environment') do
  setcode do
    Puppet[:environment]
  end
end


J



[Puppet Users] puppetdb missing environment fact

2013-12-04 Thread james . eckersall
Hi,

I'm seeing something rather strange with puppetdb (1.5.2) in regards to the 
environment fact.

On my puppetdb host:

If I run the following query:

curl -G 'http://localhost:8080/v3/facts' --data-urlencode 'query=["=", "name", "environment"]'

I would expect to receive the environment fact for every node that I'm 
managing with puppet (>500).

However, that query only returns 11 nodes.  These 11 nodes are running 
puppet 2.7.22.

I am in the process of upgrading puppet to 3.3.2 from 2.7.22.

All of the nodes running 3.3.2 are missing the environment fact from 
puppetdb.  All the 2.7.22 nodes have the environment fact stored.

Can anyone think of a reason why the environment fact is missing for my 
3.3.2 nodes?

J



[Puppet Users] Re: Random Internal Server Error after upgrading from 2.7.19 to 3.3.1

2013-11-29 Thread james . eckersall
Hi,

For reference, I've just upgraded my puppet masters from 2.7.22 to 3.3.2 
and haven't seen any errors of this kind.

I presume you are running with passenger?  I am too.  CentOS EL6 masters.

Maybe there is a change between 3.3.1 and 3.3.2 that will resolve this for 
you both.

I have seen one other nasty error though.

If you aren't setting the vardir explicitly in all clients' puppet.conf, I'd 
suggest doing so before upgrading.
After my upgrade, I had to manually intervene on almost all client nodes 
because puppet was failing to run.  Bug filed at 
http://projects.puppetlabs.com/issues/23311
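i.e. something along these lines in each client's puppet.conf (the path 
shown is the usual Linux default; use whatever yours actually is):

[main]
vardir = /var/lib/puppet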

J

On Friday, 29 November 2013 04:27:20 UTC, Laurent Domb wrote:
>
> I am running into the exact same issue with 3.3.1 Did you find a solution 
> for it? 
>
> On Thursday, October 24, 2013 1:54:28 AM UTC-4, Lou wrote:
>>
>> Hello,
>>
>> I have a rhel 6 puppet master with the following packages installed:
>>
>> facter.x86_641:1.7.3-1.el6
>> hiera.noarch  1.2.1-1.el6
>> puppet.noarch  3.3.1-1.el6 
>> puppet-server.noarch   3.3.1-1.el6
>> ruby.x86_64   1.8.7.352-12.el6_4
>>
>> I have recently upgraded the puppet master from 2.7.19 to 3.3.1, 
>> downloaded from the puppetlabs yum repo.
>>
>> I am now randomly seeing the following errors:
>>
>> 1.On the node I get:
>> …
>> Debug: catalog supports formats: b64_zlib_yaml dot pson raw yaml; using 
>> pson
>> Error: Could not retrieve catalog from remote server: Error 500 on 
>> SERVER: 
>> 
>> 500 Internal Server Error
>> 
>> Internal Server Error
>> The server encountered an internal error or
>> misconfiguration and was unable to complete
>> your request.
>> Please contact the server administrator,
>> root@localhost and inform them of the time the error occurred,
>> and anything you might have done that may have
>> caused the error.
>> More information about this error may be available
>> in the server error log.
>> 
>> Apache/2.2.15 (Red Hat) Server at  Port 8140
>> 
>>
>> /opt/csw/lib/ruby/site_ruby/1.8/puppet/indirector/rest.rb:185:in 
>> `is_http_200?'
>> /opt/csw/lib/ruby/site_ruby/1.8/puppet/indirector/rest.rb:100:in `find'
>> /opt/csw/lib/ruby/site_ruby/1.8/puppet/indirector/indirection.rb:197:in 
>> `find'
>> /opt/csw/lib/ruby/site_ruby/1.8/puppet/configurer.rb:243:in 
>> `retrieve_new_catalog'
>> /opt/csw/lib/ruby/site_ruby/1.8/puppet/util.rb:351:in `thinmark'
>> /opt/csw/lib/ruby/1.8/benchmark.rb:308:in `realtime'
>> /opt/csw/lib/ruby/site_ruby/1.8/puppet/util.rb:350:in `thinmark'
>> /opt/csw/lib/ruby/site_ruby/1.8/puppet/configurer.rb:242:in 
>> `retrieve_new_catalog'
>> /opt/csw/lib/ruby/site_ruby/1.8/puppet/configurer.rb:67:in 
>> `retrieve_catalog'
>> /opt/csw/lib/ruby/site_ruby/1.8/puppet/configurer.rb:107:in 
>> `prepare_and_retrieve_catalog'
>> /opt/csw/lib/ruby/site_ruby/1.8/puppet/configurer.rb:159:in `run'
>> /opt/csw/lib/ruby/site_ruby/1.8/puppet/agent.rb:45:in `run'
>> /opt/csw/lib/ruby/site_ruby/1.8/puppet/agent/locker.rb:20:in `lock'
>> /opt/csw/lib/ruby/site_ruby/1.8/puppet/agent.rb:45:in `run'
>> /opt/csw/lib/ruby/1.8/sync.rb:230:in `synchronize'
>> /opt/csw/lib/ruby/site_ruby/1.8/puppet/agent.rb:45:in `run'
>> /opt/csw/lib/ruby/site_ruby/1.8/puppet/agent.rb:119:in `with_client'
>> /opt/csw/lib/ruby/site_ruby/1.8/puppet/agent.rb:42:in `run'
>> /opt/csw/lib/ruby/site_ruby/1.8/puppet/agent.rb:84:in `run_in_fork'
>> /opt/csw/lib/ruby/site_ruby/1.8/puppet/agent.rb:41:in `run'
>> /opt/csw/lib/ruby/site_ruby/1.8/puppet/application.rb:179:in `call'
>> /opt/csw/lib/ruby/site_ruby/1.8/puppet/application.rb:179:in 
>> `controlled_run'
>> /opt/csw/lib/ruby/site_ruby/1.8/puppet/agent.rb:39:in `run'
>> /opt/csw/lib/ruby/site_ruby/1.8/puppet/application/agent.rb:353:in 
>> `onetime'
>> /opt/csw/lib/ruby/site_ruby/1.8/puppet/application/agent.rb:327:in 
>> `run_command'
>> /opt/csw/lib/ruby/site_ruby/1.8/puppet/application.rb:364:in `run'
>> /opt/csw/lib/ruby/site_ruby/1.8/puppet/application.rb:456:in `plugin_hook'
>> /opt/csw/lib/ruby/site_ruby/1.8/puppet/application.rb:364:in `run'
>> /opt/csw/lib/ruby/site_ruby/1.8/puppet/util.rb:504:in `exit_on_fail'
>> /opt/csw/lib/ruby/site_ruby/1.8/puppet/application.rb:364:in `run'
>> /opt/csw/lib/ruby/site_ruby/1.8/puppet/util/command_line.rb:132:in `run'
>> /opt/csw/lib/ruby/site_ruby/1.8/puppet/util/command_line.rb:86:in 
>> `execute'
>> /opt/csw/bin/puppet:4
>> Warning: Not using cache on failed catalog
>> Error: Could not retrieve catalog; skipping run
>>
>> 2.In /var/log/messages I see errors such as:
>>
>> Oct 24 14:01:57  puppet-master[13114]: Compiled catalog for 
>>  in environment production in 33.30 seconds
>> Oct 24 14:02:11  puppet-master[13114]: YAML in network requests 
>> is deprecated and will be removed in a future version. See 
>> http://links.puppetlabs.com/deprecate_yaml_on_network
>> Oct 24 14:02:11  puppet-master[13114]:(at 
>> /usr/lib/ruby/site_ruby/1.8/puppet/network/http/handler.rb:252:in 
>> `respons

[Puppet Users] Re: facter timeouts

2013-11-08 Thread james . eckersall

On Tuesday, 5 November 2013 15:14:26 UTC, jcbollinger wrote:
>
>
>
> On Monday, November 4, 2013 10:38:00 AM UTC-6, james.e...@fasthosts.comwrote:
>>
>> Hi,
>>
>> I am having some issues with facter on a couple of servers which have a 
>> large number of ip addresses.
>>
>> Essentially, all my puppet runs time out because facter takes in excess 
>> of 25 seconds to populate the facts.
>>
>> Here is the list of interfaces - pretty much each one has an IP assigned.
>>
>> interfaces => 
>> eth0,eth1,eth1_1,eth1_2,eth1_3,eth1_4,eth1_5,eth1_6,eth1_7,eth1_8,eth1_9,eth1_10,eth1_11,eth1_12,eth1_13,eth1_14,eth1_15,eth1_16,eth1_17,eth1_18,eth1_19,eth1_20,eth1_21,eth1_22,eth1_23,eth1_24,eth1_25,
>>
>> eth1_26,eth1_27,eth1_28,eth1_29,eth1_30,eth1_31,eth1_32,eth1_33,eth1_34,eth1_35,eth1_36,eth1_37,eth1_38,eth1_39,eth1_40,eth1_41,eth1_42,eth1_43,eth1_44,eth1_45,eth1_46,eth1_47,eth1_48,eth1_49,eth1_50,
>>
>> eth1_51,eth1_52,eth1_53,eth1_54,eth1_55,eth1_56,eth1_57,eth1_58,eth1_59,eth1_60,eth1_61,eth1_62,eth1_63,eth1_64,eth1_65,eth1_66,eth1_67,eth1_68,eth1_69,eth1_70,eth1_71,eth1_72,eth1_73,eth1_74,eth1_75,
>>
>> eth1_76,eth1_77,eth1_78,eth1_79,eth1_80,eth1_81,eth1_82,eth1_83,eth1_84,eth1_85,eth1_86,eth1_87,eth1_88,eth1_89,eth1_90,eth1_91,eth1_92,eth1_93,eth1_94,eth1_95,eth1_96,eth1_97,eth1_98,eth1_99,eth1_100,
>>
>> eth1_101,eth1_102,eth1_103,eth1_104,eth1_105,eth1_106,eth1_107,eth1_108,eth1_109,eth1_110,eth1_111,eth1_112,eth1_113,eth1_114,eth1_115,eth1_116,eth1_117,eth1_118,eth1_119,eth1_120,eth1_121,eth1_122,
>>
>> eth1_123,eth1_124,eth1_125,eth1_126,eth1_127,eth1_128,eth1_129,eth1_130,eth1_131,eth1_132,eth1_133,eth1_134,eth1_135,eth1_136,eth1_137,eth1_138,eth1_139,eth1_140,eth1_141,eth1_142,eth1_143,eth1_144,
>>
>> eth1_145,eth1_146,eth1_147,eth1_148,eth1_149,eth1_150,eth1_151,eth1_152,eth1_153,eth1_154,eth1_155,eth1_156,eth1_157,eth1_158,eth1_159,eth1_160,eth1_161,eth1_162,eth1_163,eth1_164,eth1_165,eth1_166,
>>
>> eth1_167,eth1_168,eth1_169,eth1_170,eth1_171,eth1_172,eth1_173,eth1_174,eth1_175,eth1_176,eth1_177,eth1_178,eth1_179,eth1_180,eth1_181,eth1_182,eth1_183,eth1_184,eth1_185,eth1_186,eth1_187,eth1_188,
>>
>> eth1_189,eth1_190,eth1_191,eth1_192,eth1_193,eth1_194,eth1_195,eth1_196,eth1_197,eth1_198,eth1_199,eth1_200,eth1_201,eth1_202,eth1_203,eth1_204,eth1_205,eth1_206,eth1_207,eth1_208,eth1_209,eth1_210,eth1_211,
>>
>> eth1_212,eth1_213,eth1_214,eth1_215,eth1_216,eth1_217,eth1_218,eth1_219,eth1_220,eth1_221,eth1_222,eth1_223,eth1_224,eth1_225,eth1_226,eth1_227,eth1_228,eth1_229,eth1_230,eth1_231,eth1_232,eth1_233,eth1_234,
>>
>> eth1_235,eth1_236,eth1_237,eth1_238,eth1_239,eth1_240,eth1_241,eth1_242,eth1_243,eth1_244,eth1_245,eth1_246,eth1_247,eth1_248,eth1_249,eth1_250,eth1_251,eth1_252,eth1_253,eth1_254,eth1_255,eth1_256,eth1_257,
>>
>> eth1_258,eth1_259,eth1_260,eth1_261,eth1_262,eth1_263,eth1_264,eth1_265,eth1_266,eth1_267,eth1_268,eth1_269,eth1_270,eth1_271,eth1_272,eth1_273,eth1_274,eth1_275,eth1_276,eth1_277,eth1_278,eth1_279,eth1_280,
>>
>> eth1_281,eth1_282,eth1_283,eth1_284,eth1_285,eth1_286,eth1_287,eth1_288,eth1_289,eth1_290,eth1_291,eth1_292,eth1_293,eth1_294,eth1_295,eth1_296,eth1_297,eth1_298,eth1_299,eth1_300,eth1_301,eth1_302,eth1_303,
>>
>> eth1_304,eth1_305,eth1_306,eth1_307,eth1_308,eth1_309,eth1_310,eth1_311,eth1_312,eth1_313,eth1_314,eth1_315,eth1_316,eth1_317,eth1_318,eth1_319,eth1_320,eth1_321,eth1_322,eth1_323,eth1_324,eth1_325,eth1_326,
>>
>> eth1_327,eth1_328,eth1_329,eth1_330,eth1_331,eth1_332,eth1_333,eth1_334,eth1_335,eth1_336,eth1_337,eth1_338,eth1_339,eth1_340,eth1_341,eth1_342,eth1_343,eth1_344,eth1_345,eth1_346,eth1_347,eth1_348,eth1_349,
>>
>> eth1_350,eth1_351,eth1_352,eth1_353,eth1_354,eth1_355,eth1_356,eth1_357,eth1_358,eth1_359,eth1_360,eth1_361,eth1_362,eth1_363,eth1_364,eth1_365,eth1_366,eth1_367,eth1_368,eth1_369,eth1_370,eth1_371,eth1_372,
>>
>> eth1_373,eth1_374,eth1_375,eth1_376,eth1_377,eth1_378,eth1_379,eth1_380,eth1_381,eth1_382,eth1_383,eth1_384,eth1_385,eth1_386,eth1_387,eth1_388,eth1_389,eth1_390,eth1_391,eth1_392,eth1_393,eth1_394,eth1_395,
>>
>> eth1_396,eth1_397,eth1_398,eth1_399,eth1_400,eth1_401,eth1_402,eth1_403,eth1_404,eth1_405,eth1_406,eth1_407,eth1_408,eth1_409,eth1_410,eth1_411,eth1_412,eth1_413,eth1_414,eth1_415,eth1_416,eth1_417,eth1_418,
>>
>> eth1_419,eth1_420,eth1_421,eth1_422,eth1_423,eth1_424,eth1_425,eth1_426,eth1_427,eth1_428,eth1_429,eth1_430,eth1_431,eth1_432,eth1_433,eth1_434,eth1_435,eth1_436,eth1_437,eth1_438,eth1_439,eth1_440,eth1_441,
>>
>> eth1_442,eth1_443,eth1_444,eth1_445,eth1_446,eth1_447,eth1_448,eth1_449,eth1_450,eth1_451,eth1_452,eth1_453,eth1_454,eth1_455,eth1_456,eth1_457,eth1_458,eth1_459,eth1_460,eth1_461,eth1_462,eth1_463,eth1_464,
>>
>> eth1_465,eth1_466,eth1_467,eth1_468,eth1_469,eth1_470,eth1_471,eth1_472,eth1_473,eth1_474,eth1_475,eth1_476,eth1_477,eth1_478,eth1_479,eth1_480,eth1_481,eth1_482,eth1_483,eth1_484,eth1_485,eth1_486,eth1_487,
>>
>> eth1_488,eth1_489,eth1_490,eth1_491,eth1_492,eth1_493,eth1_494,eth1_495,et

[Puppet Users] facter timeouts

2013-11-04 Thread james . eckersall
Hi,

I am having some issues with facter on a couple of servers which have a 
large number of ip addresses.

Essentially, all my puppet runs time out because facter takes in excess of 
25 seconds to populate the facts.

Here is the list of interfaces - pretty much each one has an IP assigned.

interfaces => 
eth0,eth1,eth1_1,eth1_2,eth1_3,eth1_4,eth1_5,eth1_6,eth1_7,eth1_8,eth1_9,eth1_10,eth1_11,eth1_12,eth1_13,eth1_14,eth1_15,eth1_16,eth1_17,eth1_18,eth1_19,eth1_20,eth1_21,eth1_22,eth1_23,eth1_24,eth1_25,
eth1_26,eth1_27,eth1_28,eth1_29,eth1_30,eth1_31,eth1_32,eth1_33,eth1_34,eth1_35,eth1_36,eth1_37,eth1_38,eth1_39,eth1_40,eth1_41,eth1_42,eth1_43,eth1_44,eth1_45,eth1_46,eth1_47,eth1_48,eth1_49,eth1_50,
eth1_51,eth1_52,eth1_53,eth1_54,eth1_55,eth1_56,eth1_57,eth1_58,eth1_59,eth1_60,eth1_61,eth1_62,eth1_63,eth1_64,eth1_65,eth1_66,eth1_67,eth1_68,eth1_69,eth1_70,eth1_71,eth1_72,eth1_73,eth1_74,eth1_75,
eth1_76,eth1_77,eth1_78,eth1_79,eth1_80,eth1_81,eth1_82,eth1_83,eth1_84,eth1_85,eth1_86,eth1_87,eth1_88,eth1_89,eth1_90,eth1_91,eth1_92,eth1_93,eth1_94,eth1_95,eth1_96,eth1_97,eth1_98,eth1_99,eth1_100,
eth1_101,eth1_102,eth1_103,eth1_104,eth1_105,eth1_106,eth1_107,eth1_108,eth1_109,eth1_110,eth1_111,eth1_112,eth1_113,eth1_114,eth1_115,eth1_116,eth1_117,eth1_118,eth1_119,eth1_120,eth1_121,eth1_122,
eth1_123,eth1_124,eth1_125,eth1_126,eth1_127,eth1_128,eth1_129,eth1_130,eth1_131,eth1_132,eth1_133,eth1_134,eth1_135,eth1_136,eth1_137,eth1_138,eth1_139,eth1_140,eth1_141,eth1_142,eth1_143,eth1_144,
eth1_145,eth1_146,eth1_147,eth1_148,eth1_149,eth1_150,eth1_151,eth1_152,eth1_153,eth1_154,eth1_155,eth1_156,eth1_157,eth1_158,eth1_159,eth1_160,eth1_161,eth1_162,eth1_163,eth1_164,eth1_165,eth1_166,
eth1_167,eth1_168,eth1_169,eth1_170,eth1_171,eth1_172,eth1_173,eth1_174,eth1_175,eth1_176,eth1_177,eth1_178,eth1_179,eth1_180,eth1_181,eth1_182,eth1_183,eth1_184,eth1_185,eth1_186,eth1_187,eth1_188,
eth1_189,eth1_190,eth1_191,eth1_192,eth1_193,eth1_194,eth1_195,eth1_196,eth1_197,eth1_198,eth1_199,eth1_200,eth1_201,eth1_202,eth1_203,eth1_204,eth1_205,eth1_206,eth1_207,eth1_208,eth1_209,eth1_210,eth1_211,
eth1_212,eth1_213,eth1_214,eth1_215,eth1_216,eth1_217,eth1_218,eth1_219,eth1_220,eth1_221,eth1_222,eth1_223,eth1_224,eth1_225,eth1_226,eth1_227,eth1_228,eth1_229,eth1_230,eth1_231,eth1_232,eth1_233,eth1_234,
eth1_235,eth1_236,eth1_237,eth1_238,eth1_239,eth1_240,eth1_241,eth1_242,eth1_243,eth1_244,eth1_245,eth1_246,eth1_247,eth1_248,eth1_249,eth1_250,eth1_251,eth1_252,eth1_253,eth1_254,eth1_255,eth1_256,eth1_257,
eth1_258,eth1_259,eth1_260,eth1_261,eth1_262,eth1_263,eth1_264,eth1_265,eth1_266,eth1_267,eth1_268,eth1_269,eth1_270,eth1_271,eth1_272,eth1_273,eth1_274,eth1_275,eth1_276,eth1_277,eth1_278,eth1_279,eth1_280,
eth1_281,eth1_282,eth1_283,eth1_284,eth1_285,eth1_286,eth1_287,eth1_288,eth1_289,eth1_290,eth1_291,eth1_292,eth1_293,eth1_294,eth1_295,eth1_296,eth1_297,eth1_298,eth1_299,eth1_300,eth1_301,eth1_302,eth1_303,
eth1_304,eth1_305,eth1_306,eth1_307,eth1_308,eth1_309,eth1_310,eth1_311,eth1_312,eth1_313,eth1_314,eth1_315,eth1_316,eth1_317,eth1_318,eth1_319,eth1_320,eth1_321,eth1_322,eth1_323,eth1_324,eth1_325,eth1_326,
eth1_327,eth1_328,eth1_329,eth1_330,eth1_331,eth1_332,eth1_333,eth1_334,eth1_335,eth1_336,eth1_337,eth1_338,eth1_339,eth1_340,eth1_341,eth1_342,eth1_343,eth1_344,eth1_345,eth1_346,eth1_347,eth1_348,eth1_349,
eth1_350,eth1_351,eth1_352,eth1_353,eth1_354,eth1_355,eth1_356,eth1_357,eth1_358,eth1_359,eth1_360,eth1_361,eth1_362,eth1_363,eth1_364,eth1_365,eth1_366,eth1_367,eth1_368,eth1_369,eth1_370,eth1_371,eth1_372,
eth1_373,eth1_374,eth1_375,eth1_376,eth1_377,eth1_378,eth1_379,eth1_380,eth1_381,eth1_382,eth1_383,eth1_384,eth1_385,eth1_386,eth1_387,eth1_388,eth1_389,eth1_390,eth1_391,eth1_392,eth1_393,eth1_394,eth1_395,
eth1_396,eth1_397,eth1_398,eth1_399,eth1_400,eth1_401,eth1_402,eth1_403,eth1_404,eth1_405,eth1_406,eth1_407,eth1_408,eth1_409,eth1_410,eth1_411,eth1_412,eth1_413,eth1_414,eth1_415,eth1_416,eth1_417,eth1_418,
eth1_419,eth1_420,eth1_421,eth1_422,eth1_423,eth1_424,eth1_425,eth1_426,eth1_427,eth1_428,eth1_429,eth1_430,eth1_431,eth1_432,eth1_433,eth1_434,eth1_435,eth1_436,eth1_437,eth1_438,eth1_439,eth1_440,eth1_441,
eth1_442,eth1_443,eth1_444,eth1_445,eth1_446,eth1_447,eth1_448,eth1_449,eth1_450,eth1_451,eth1_452,eth1_453,eth1_454,eth1_455,eth1_456,eth1_457,eth1_458,eth1_459,eth1_460,eth1_461,eth1_462,eth1_463,eth1_464,
eth1_465,eth1_466,eth1_467,eth1_468,eth1_469,eth1_470,eth1_471,eth1_472,eth1_473,eth1_474,eth1_475,eth1_476,eth1_477,eth1_478,eth1_479,eth1_480,eth1_481,eth1_482,eth1_483,eth1_484,eth1_485,eth1_486,eth1_487,
eth1_488,eth1_489,eth1_490,eth1_491,eth1_492,eth1_493,eth1_494,eth1_495,eth1_496,eth1_497,eth1_498,eth1_499,eth1_500,eth1_501,eth1_502,eth1_503,eth1_504,eth1_505,eth1_506,eth1_507,eth1_508,eth2,eth3,lo,sit0

There are just over 500 entries.  I also subsequently have >500 facts for 
ipaddress_eth1_xxx, netmask_eth1_xxx, network_eth1_xxx, 
macaddress_eth1_xxx, mtu_eth1_xxx, etc.

So

[Puppet Users] Re: Warning: Local environment: "42A" doesn't match server specified node environment "production", switching agent to "production"

2013-10-30 Thread james . eckersall
Hi,

I believe the following link should resolve this problem for you.

https://groups.google.com/forum/#!topic/foreman-users/p5w0if2AGlo

J

On Wednesday, 30 October 2013 08:47:13 UTC, AVE1810 wrote:
>
> Hi,
>
> When I run puppet agent --test --environment 42A, I have the following 
> warning :
> Warning: Local environment: "42A" doesn't match server specified node 
> environment "production", switching agent to "production".
> ...
>
> The puppet manifest for the environment "42A" isn't applied.
>
> The puppet version is 3.3.1-1puppetlabs1 on agent and puppetmaster node
>
> puppet.conf on the agent node :
>
> *[main]
> logdir=/var/log/puppet
> vardir=/var/lib/puppet
> ssldir=/var/lib/puppet/ssl
> rundir=/var/run/puppet
> factpath=$vardir/lib/facter
> templatedir=$confdir/templates
>
> pluginsync = true
>
> [agent]
> server = puppet
> report = true*
> ---
>
> puppet.conf on the puppetmaster node :
>
> *[main]
> logdir=/var/log/puppet
> vardir=/var/lib/puppet
> ssldir=/var/lib/puppet/ssl
> rundir=/var/run/puppet
> factpath=$vardir/lib/facter
> templatedir=$confdir/templates
>
> pluginsync = true
>
> [production]
> modulepath = /etc/puppet/environments/modules/production
> manifest = /etc/puppet/environments/manifests/production/site.pp
>
> [42A]
> modulepath = /etc/puppet/environments/modules/install/42A
> manifest = /etc/puppet/environments/manifests/install/site.pp
>
> [agent]
> server = puppet
> report = true
>
> [master]
> ssl_client_header = SSL_CLIENT_S_DN
> ssl_client_verify_header = SSL_CLIENT_VERIFY
>
> storeconfigs = true
> storeconfigs_backend = puppetdb
>
> reports=log,puppetdb,foreman
>
> external_nodes = /etc/puppet/node.rb
> node_terminus = exec*
> ---
>
> If i comment  the last two  lines (external_nodes and node_terminus) on 
> the puppetmaster puppet.conf node, The puppet manifest is applied correctly.
>
> Anybody has an idea ?
>
> Thanks
>



[Puppet Users] Re: onlyif return code

2013-10-03 Thread james . eckersall
The exec resource has an unless parameter too, which I think is what you 
need.

From: http://docs.puppetlabs.com/references/latest/type.html#exec

onlyif: If this parameter is set, then this exec will only run if the 
command returns 0.

unless: If this parameter is set, then this exec will run unless the command 
returns 0.
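A quick sketch of the difference (the names and paths are made up):

# runs the script only while the marker file is absent; once the
# command given to "unless" returns 0, the exec is skipped
exec { 'bootstrap_myapp':
  command => '/usr/local/bin/bootstrap_myapp.sh',
  path    => ['/bin', '/usr/bin', '/usr/local/bin'],
  unless  => 'test -f /etc/myapp/.bootstrapped',
}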



[Puppet Users] Re: mysql errors

2012-01-26 Thread James Eckersall
Looks like this message didn't reach the group for some reason.

For clarity though, I found that this was occurring on all three
masters.  I was able to resolve this by downgrading the activerecord
gem on the puppet masters from 3.1.3 to 3.0.5.
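For anyone hitting the same thing, the downgrade was roughly the following 
(the exact invocation may differ on your setup, and the other Rails gems may 
need pinning too):

gem uninstall activerecord -v 3.1.3
gem install activerecord -v 3.0.5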

On 13 January 2012 08:31, jamese  wrote:
> I'm currently running three puppet masters (version 2.7.3 via apache
> +passenger) in a cluster, 2 on CentOS 5.7, 1 on CentOS 6.1
>
> On the 6.1 master, I am frequently getting the following error
> (approximately 50% of the time when a client connects):
>
> err: Could not retrieve catalog from remote server: Error 400 on
> SERVER: Mysql::Error: Unknown prepared statement handler (7) given to
> mysqld_stmt_execute: INSERT INTO `inventory_facts` (`name`, `node_id`,
> `value`) VALUES (?, ?, ?)
>
> I don't see any errors on the other two masters running CentOS 5.7.
>
> I have a separate server running CentOS 6.1 and MySQL 5.1.52 for the
> inventory db.
>
> On the masters, I have inventory configured in the puppet.conf as
> follows:
>   facts_terminus = inventory_active_record
>   dbadapter = mysql
>   dbname = inventory
>   dbuser = inventory
>   dbpass = 
>   dbserver = x.x.x.x
>
> The masters are running ruby enterprise 1.8.7 and all have exactly the
> same versions of ruby gems installed.
>
> *** LOCAL GEMS ***
>
> activemodel (3.1.3)
> activerecord (3.1.3)
> activesupport (3.1.3)
> arel (2.2.1)
> builder (3.0.0)
> facter (1.6.3)
> fastthread (1.0.7)
> i18n (0.6.0)
> multi_json (1.0.3)
> mysql (2.8.1)
> mysql2 (0.3.10)
> passenger (2.2.9)
> puppet (2.7.3)
> rack (1.1.0)
> rake (0.8.7)
> tzinfo (0.3.31)
>
> The only (potentially related) differences I can see between the
> masters are with the mysql-libs package (5.1.52-1 on EL6.1 and
> 5.0.77-4 on EL5.7) and the ruby-mysql package (ruby-mysql-2.8.2-1 on
> EL6.1, ruby-mysql-2.7.3-1 on EL5.7), although I'm not sure if this is
> relevant.
>
> Any help regarding these errors would be greatly appreciated.
