Re: [Puppet Users] Re: puppet-dashboard 2.0.0 (open source) and postgresql 8.4 tuning

2014-12-19 Thread Pete Hartman
I'm no longer at that position, haven't seen it in 8 months
On Dec 19, 2014 3:48 PM, Gav gm...@blackrock.com wrote:

 Pete, what version of Passenger are you running? I have deployed
 puppet-dashboard 2.0.0 this week with Passenger 4.0.56 and Ruby 1.9.3, but
 Passenger is just eating the memory.

 ---------- Passenger processes -----------
 PID    VMSize     Private    Name
 -------------------------------------------
 5173   6525.1 MB  3553.0 MB  Passenger RackApp: /local/puppet/dashboard/dashboard
 5662   5352.7 MB  4900.8 MB  Passenger RackApp: /local/puppet/dashboard/dashboard
 5682   5736.8 MB  5307.1 MB  Passenger RackApp: /local/puppet/dashboard/dashboard
 8486   6525.2 MB  4469.5 MB  Passenger RackApp: /local/puppet/dashboard/dashboard
 10935  6525.0 MB  3282.3 MB  Passenger RackApp: /local/puppet/dashboard/dashboard
 11885  6380.3 MB  3905.9 MB  Passenger RackApp: /local/puppet/dashboard/dashboard
 20886  209.8 MB   0.1 MB     PassengerWatchdog
 20889  2554.9 MB  7.2 MB     PassengerHelperAgent
 20896  208.9 MB   0.0 MB     PassengerLoggingAgent
 21245  2602.8 MB  2268.6 MB  Passenger RackApp: /local/puppet/dashboard/dashboard
 22912  500.7 MB   115.4 MB   Passenger RackApp: /local/puppet/etc/rack
 24873  6505.1 MB  3592.6 MB  Passenger RackApp: /local/puppet/dashboard/dashboard
 26226  1944.3 MB  1616.6 MB  Passenger RackApp: /local/puppet/dashboard/dashboard
 29012  6525.0 MB  3460.4 MB  Passenger RackApp: /local/puppet/dashboard/dashboard
 30564  4072.7 MB  3675.4 MB  Passenger RackApp: /local/puppet/dashboard/dashboard
 31060  3526.8 MB  3181.6 MB  Passenger RackApp: /local/puppet/dashboard/dashboard
 31733  6505.5 MB  5761.4 MB  Passenger RackApp: /local/puppet/dashboard/dashboard
 31740  6525.4 MB  5812.2 MB  Passenger RackApp: /local/puppet/dashboard/dashboard
 ### Processes: 18
 ### Total private dirty RSS: 54910.21 MB
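
 For anyone hitting the same thing: capping and recycling the Passenger pool in
 the Apache config is a common first step. A minimal sketch; the values below
 are illustrative, not tuned for this site:

   # Cap the number of application processes Passenger will spawn
   PassengerMaxPoolSize 6
   # Recycle a process after it has served this many requests, so a leak
   # cannot grow without bound
   PassengerMaxRequests 1000
   # Shut down processes that have been idle for five minutes
   PassengerPoolIdleTime 300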

 Any help would be appreciated.

 Cheers,
 Gavin

 On Monday, 17 March 2014 20:29:26 UTC, Pete Hartman wrote:

 I deployed the open source puppet-dashboard 2.0.0 this past weekend for
 our production environment.  I did a fair amount of testing in the lab to
 ensure I had the deployment down, and I deployed as a passenger service
 knowing that we have a large environment and that webrick wasn't likely to
 cut it.  Overall, it appears to be working and behaving reasonably--I get
 the summary run status graph and the rest of the UI.  Load average on the
 box is high-ish but nothing unreasonable, and I certainly appear to have
 headroom in memory and CPU.

 However, when I click the "export nodes as CSV" link, it runs forever
 (it hasn't stopped yet).

 I looked into what the database was doing and it appears to be looping
 over some unknown number of report_ids, doing

 (pid 7172, database dashboard, query running for 00:00:15.575955)

 SELECT COUNT(*) FROM resource_statuses
  WHERE resource_statuses.report_id = 39467
    AND resource_statuses.failed = 'f'
    AND (resource_statuses.id IN (
          SELECT resource_statuses.id FROM resource_statuses
          INNER JOIN resource_events
            ON resource_statuses.id = resource_events.resource_status_id
          WHERE resource_events.status = 'noop'
        ))


 I ran the inner join by hand and it takes roughly 2 - 3 minutes each
 time.  The overall query appears to be running 8 minutes per report ID.

 I've done a few things to tweak postgresql before this--it could have
 been running longer earlier when I first noticed the problem.

 I increased checkpoint_segments to 32 from the default of 3, the
 checkpoint_completion_target to 0.9 from the default of 0.5, and to be able
 to observe what's going on I set stats_command_string to on.

 Some other details: we have 3400 nodes (dashboard is only seeing 3290 or
 so, which is part of why I want this CSV report: to determine why it's a
 smaller number).  This postgresql instance is also the instance supporting
 puppetdb, though obviously a separate database.  The resource statuses
 table has 47 million rows right now, and the inner join returns 4.3 million.

 I'm curious if anyone else is running this version on postgresql with a
 large environment and if there are places I ought to be looking to tune
 this so it will run faster, or if I need to be doing something to shrink
 those tables without losing information, etc.

 Thanks

 Pete


Re: [Puppet Users] Re: Puppet first run timing out

2014-05-12 Thread Pete Hartman
I doubt the problem I was referring to would have any impact on
CentOS.  The problem was that we were building on SPARC, and had used
compile options that caused the build to go to a lowest common
denominator architecture of SPARC V7, which does not have a lot of
important SPARC V9 assembly instructions for doing block copies and
math.  This slowed the crypto routines a LOT.
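
One quick way to confirm that kind of problem, and the sort of build change
that fixes it (the Configure target below is only an example of a V9-aware
target, not necessarily the one we used):

    # Compare raw crypto throughput on the suspect box against a known-good x86 box
    openssl speed sha256 rsa2048

    # When rebuilding, pick a SPARC V9 target rather than letting the build
    # fall back to a generic / V7 one (target name is illustrative)
    ./Configure solaris-sparcv9-gcc shared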

I suppose there might be some kind of similar behavior under CentOS
with the wrong build options, but I'd be at a complete loss to suggest
what they might be.



On Mon, May 12, 2014 at 10:13 AM, Mathew Winstone
mwinst...@coldfrontlabs.ca wrote:
 Any chance you can share what configuration options were non-optimal?

 We're having some timeout issues as well on CentOS.


 On Thursday, 5 September 2013 14:32:56 UTC-4, Pete Hartman wrote:

 Being able to set the timeout long enough gave us enough data to find the
 problem.

 Our SPARC build of OpenSSL used some configuration options that were,
 shall we say, non-optimal :-).

 On a corrected OpenSSL build the SPARC times are now in the same ballpark
 as the x86 times.

 Thanks again for your help Cory.

 Pete

 On Wednesday, September 4, 2013 6:56:34 PM UTC-5, Cory Stoker wrote:

 We have lots of puppet clients on crappy bandwidth that would time out
 like this as well.  The option we changed to fix this is:

 #Specify the timeout to wait for catalog in seconds
 configtimeout = 600

 The default time is like 60 or 120 secs.  Another thing you should do
 is check out the logs of the web server if you are using passenger.
 You should see a ton of GET requests when you need to sync plugins.
 To force your puppet agent to redownload stuff remove the $vardir/lib
 directory on the agent.
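
 For reference, configtimeout is an agent-side setting, so a minimal sketch of
 where it would live is the following (the value is just an example):

   # /etc/puppet/puppet.conf on the agent
   [agent]
       # wait up to 10 minutes for catalog/file retrieval before giving up
       configtimeout = 600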


 On Wed, Sep 4, 2013 at 1:48 PM, Pete Hartman pete.h...@gmail.com wrote:
  I'm having a similar problem.
 
  I know for a fact that I am not contending with other agents, because
  this
  is in a lab environment and none of my agents is scheduled for periodic
  runs
  (yet).
 
  I have successfully run puppet agent -t first time, signed the cert,
  and run
  it a second time to pull over stdlib and other modules on agents
  running
  RHEL 6 and Solaris 10u10 x86.
 
  But I'm getting this timeout on a Solaris 10u10 box running on a T4-1
  SPARC
  system.
 
  This was my third run:
 
   # date;puppet agent -t;date
  Wed Sep  4 14:12:05 CDT 2013
  Info: Retrieving plugin
  Notice:
  /File[/var/lib/puppet/lib/puppet/parser/functions/count.rb]/ensure:
  defined content as '{md5}9eb74eccd93e2b3c87fd5ea14e329eba'
  Notice:
 
  /File[/var/lib/puppet/lib/puppet/parser/functions/validate_bool.rb]/ensure:
  defined content as '{md5}4ddffdf5954b15863d18f392950b88f4'
  Notice:
 
  /File[/var/lib/puppet/lib/puppet/parser/functions/get_module_path.rb]/ensure:
  defined content as '{md5}d4bf50da25c0b98d26b75354fa1bcc45'
  Notice:
 
  /File[/var/lib/puppet/lib/puppet/parser/functions/is_ip_address.rb]/ensure:
  defined content as '{md5}a714a736c1560e8739aaacd9030cca00'
  Error:
 
  /File[/var/lib/puppet/lib/puppet/parser/functions/is_numeric.rb]/ensure:
  change from absent to file failed: execution expired
 
  Error: Could not retrieve plugin: execution expired
  Info: Caching catalog for AGENT
  Info: Applying configuration version '1378322110'
  Notice: Finished catalog run in 0.11 seconds
  Wed Sep  4 14:15:58 CDT 2013
 
 
  Each time I've run it, I get about 10 or so files and then I get
  execution
  expired.
 
  What I'd really like to see is whether I can increase the expiry
  timeout.
 
 
  Some other details:  The master is RHEL 6 on a Sun/Oracle X4800, lots
  and
  lots of fast cores and memory.  I'm using Puppet Open Source. I'm using
  passenger.  I have no real modules other than some basic forge modules
  I've
  installed to start out with.
 
  [root@MASTER audit]# cd /etc/puppet/modules
  [root@MASTER modules]# ls
  apache  concat  epel  firewall  inifile  passenger  puppet  puppetdb
  ruby
  stdlib
 
  I briefly disabled SELinux on the master, but saw no change in
  behavior.
 
  I'm certain that the firewall is right because other agents have had no
  problems.  iptables IS enabled, however.
 
  The master and the agent are on the same subnet, so I don't suspect a
  network performance issue directly.
 
  On Solaris, because the vendor supplied OpenSSL is antique and doesn't
  include SHA256, we have built our own OpenSSL and our own Ruby using
  that
  OpenSSL Library.  Even though SPARC is a 64 bit architecture, Ruby
  seems to
  default to a 32 bit build, so we built OpenSSL as 32 bit as well to
  match.
  I've got an open question to the guy responsible for that to see how
  hard it
  would be to try to build Ruby as 64 bit, that's likely a next test.
 
  I have not yet run snoop on the communication to see what's going on
  the
  network side, but as I say I don't really expect the network to be the
  problem, between being on the same subnet and success on other systems
  with
  higher clock

[Puppet Users] Augeas Question

2014-04-28 Thread Pete Hartman
I hope this is an appropriate place for this question; if not, any 
redirection to a more appropriate place is appreciated.

So: I'm trying to set up puppet + augeas on an opensolaris system.  There 
are certain files I have that are under RCS version control--this is not 
all my doing, so some of these things I cannot change.  This creates files 
under /etc/ with names like filename,v.

I'm using puppet 2.7.22 and augeas 1.0.0.

Well, augtool (and apparently augeas invoked via puppet) throws up on this. 
Augtool reports:

Failed to initialize Augeas
error: Invalid path expression
error: garbage at end of path expression
/augeas/files/etc/default/nfs|=|,v

I just want augeas to ignore these files...

I think the answer is to modify the 
/usr/local/share/augeas/lenses/dist/*.aug files that refer to things that 
are causing this grief and add excl clauses to explicitly exclude anything 
*,v. 

For example, in shellvars.aug we have

let filter_default = incl "/etc/default/*"
                   . excl "/etc/default/grub_installdevice*"
                   . excl "/etc/default/whoopsie"

I've added

                   . excl "/etc/default/*,v"

This appears to work.  But I'm not sure this is the right solution -- 
should I perhaps be making a copy of this somewhere else and overriding 
the dist version?  Is there a more global way to say ignore all files 
that end in ,v?
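
One way to sidestep the loading problem for Puppet-managed edits, regardless of
the lens filters, is to point the augeas resource at a single file with
lens/incl, so nothing else in /etc/default (including the ,v files) is ever
parsed. A sketch, with an illustrative file and change:

    # Only /etc/default/nfs is loaded, so the RCS ,v files are never seen
    augeas { 'nfs defaults':
      lens    => 'Shellvars.lns',
      incl    => '/etc/default/nfs',
      changes => 'set NFSD_SERVERS 16',
    }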

Thanks

Pete



[Puppet Users] puppet-dashboard 2.0.0 (open source) and postgresql 8.4 tuning

2014-03-17 Thread Pete Hartman
I deployed the open source puppet-dashboard 2.0.0 this past weekend for our 
production environment.  I did a fair amount of testing in the lab to 
ensure I had the deployment down, and I deployed as a passenger service 
knowing that we have a large environment and that webrick wasn't likely to 
cut it.  Overall, it appears to be working and behaving reasonably--I get 
the summary run status graph and the rest of the UI.  Load average on the 
box is high-ish but nothing unreasonable, and I certainly appear to have 
headroom in memory and CPU.

However, when I click the "export nodes as CSV" link, it runs forever 
(it hasn't stopped yet).

I looked into what the database was doing and it appears to be looping over 
some unknown number of report_ids, doing 

(pid 7172, database dashboard, query running for 00:00:15.575955)

SELECT COUNT(*) FROM resource_statuses
 WHERE resource_statuses.report_id = 39467
   AND resource_statuses.failed = 'f'
   AND (resource_statuses.id IN (
         SELECT resource_statuses.id FROM resource_statuses
         INNER JOIN resource_events
           ON resource_statuses.id = resource_events.resource_status_id
         WHERE resource_events.status = 'noop'
       ))



I ran the inner join by hand and it takes roughly 2 - 3 minutes each time.  
The overall query appears to be running 8 minutes per report ID.
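
With 47 million rows in resource_statuses, the per-report COUNT and the noop
subquery will stay painful unless both sides are indexed. A sketch of indexes
that would likely help if they don't already exist (names are illustrative;
compare EXPLAIN ANALYZE output before and after):

    -- speeds up the noop subquery's filter and join
    CREATE INDEX index_resource_events_on_status_and_resource_status_id
        ON resource_events (status, resource_status_id);

    -- speeds up the per-report scan of resource_statuses
    CREATE INDEX index_resource_statuses_on_report_id
        ON resource_statuses (report_id);

    -- check how the planner actually runs the subquery
    EXPLAIN ANALYZE
    SELECT resource_statuses.id
      FROM resource_statuses
     INNER JOIN resource_events
        ON resource_statuses.id = resource_events.resource_status_id
     WHERE resource_events.status = 'noop';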

I've done a few things to tweak postgresql before this--it could have been 
running longer earlier when I first noticed the problem.

I increased checkpoint_segments to 32 from the default of 3, the 
checkpoint_completion_target to 0.9 from the default of 0.5, and to be able 
to observe what's going on I set stats_command_string to on.
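
For reference, the corresponding postgresql.conf lines look roughly like this
(whether these values are right depends on the write volume; note that
stats_command_string was renamed track_activities in PostgreSQL 8.3, so on 8.4
the newer name is the one to set):

    # postgresql.conf
    checkpoint_segments = 32               # default 3
    checkpoint_completion_target = 0.9     # default 0.5
    track_activities = on                  # newer name for stats_command_string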

Some other details: we have 3400 nodes (dashboard is only seeing 3290 or 
so, which is part of why I want this CSV report: to determine why it's a 
smaller number).  This postgresql instance is also the instance supporting 
puppetdb, though obviously a separate database.  The resource statuses 
table has 47 million rows right now, and the inner join returns 4.3 million.

I'm curious if anyone else is running this version on postgresql with a 
large environment and if there are places I ought to be looking to tune 
this so it will run faster, or if I need to be doing something to shrink 
those tables without losing information, etc.

Thanks

Pete



[Puppet Users] Re: puppet-dashboard 2.0.0 (open source) and postgresql 8.4 tuning

2014-03-17 Thread Pete Hartman
I also increased bgwriter_lru_maxpages to 500 from the default 100.



Re: [Puppet Users] Pulling my hair out with CA proxying

2013-10-02 Thread Pete Hartman
I do not have responsibility for the F5's and I'm not sure what my
networking team would be willing to do in terms of custom rules no
matter how simple.

The use of the apache proxy service on the masters is a configuration
documented and recommended (at least as one alternative) by
PuppetLabs; now that I have found what I was missing, I plan to stick
with that.

On Wed, Oct 2, 2013 at 2:08 AM, Gavin Williams fatmc...@gmail.com wrote:
 Pete

 I've not done this before, however am familiar with Puppet, and know a lot 
 more about F5s...

 I note that you say that you're expecting apache on the masters to proxy onto 
 the CA server.
 Is there any reason you couldn't use the F5 to select the CA server for any 
 CA requests?
 Should be a fairly straightforward iRule to do pool selection based on the 
 URI.

 Thoughts?

 Gav




Re: [Puppet Users] Pulling my hair out with CA proxying

2013-10-02 Thread Pete Hartman
I tried to update this, but apparently failed.

The problem was my own misunderstanding of Apache:

1) the passenger module was loaded before the proxy module, so the app
was responding before Apache could proxy the request, and
2) I didn't recognize this as a working fix at first because I had also
omitted mod_proxy_http, which is needed in addition to mod_proxy.
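
A sketch of the pieces that have to line up, following the multi-master guide
(module paths, the regex, and the CA hostname are illustrative):

    # Load both proxy modules; in this setup they had to be loaded before the
    # passenger module (point 1 above)
    LoadModule proxy_module modules/mod_proxy.so
    LoadModule proxy_http_module modules/mod_proxy_http.so

    # Inside the puppetmaster vhost: hand certificate traffic to the CA
    SSLProxyEngine On
    ProxyPassMatch ^/([^/]+/certificate.*)$ https://caserver.example.com:8140/$1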


Thanks...

Pete



On Wed, Oct 2, 2013 at 2:27 PM, Felipe Salum fsa...@gmail.com wrote:
 Can you paste your /etc/httpd/conf.d/puppetmaster.conf ?


 On Wednesday, October 2, 2013 5:35:58 AM UTC-7, Pete Hartman wrote:

 I do not have responsibility for the F5's and I'm not sure what my
 networking team would be willing to do in terms of custom rules no
 matter how simple.

 The use of the apache proxy service on the masters is a configuration
 documented and recommended (at least as one alternative) by
 PuppetLabs; now that I have found what I was missing, I plan to stick
 with that.

 On Wed, Oct 2, 2013 at 2:08 AM, Gavin Williams fatm...@gmail.com wrote:
  Pete
 
  I've not done this before, however am familiar with Puppet, and know a
  lot more about F5s...
 
  I note that you say that you're expecting apache on the masters to proxy
  onto the CA server.
  Is there any reason you couldn't use the F5 to select the CA server for
  any CA requests?
  Should be a fairly straightforward iRule to do pool selection based on
  the URI.
 
  Thoughts?
 
  Gav
 




[Puppet Users] Pulling my hair out with CA proxying

2013-10-01 Thread Pete Hartman
I am trying to establish what looks like a common pattern for scaling 
Puppet. My main departure is that I'm using an F5 rather than an Apache 
load balancer.  Namely, I want to have my puppet agents go through the F5 
to a pool of master-only systems, and any certificate activity to get 
proxied by those masters through to one single Certificate Authority.  That 
CA system is not part of the F5 pool; its role is to provide the CA, PuppetDB 
and PostgreSQL.  It is configured as a master because that was the easiest 
way to get a CA stood up, but I don't intend to use it as a master in 
normal operation (and in fact I don't plan to have it hosting any modules).

I'm using RHEL 6, Apache, Passenger, and Open Source Puppet.

I initially set up passenger using puppetlabs/passenger from the Forge, 
(which got me most of the way there but not fully configured).  All of 
these steps worked fine for the CA system to configure it as a working 
master (I have tested by registering systems with it, but then done puppet 
cert clean and wiped the test systems' ssl directories).

I then set up my first master-only system the same way, except I didn't 
actually start the master service (as the docs say) until after I had set 
ca = false and ca_server = $MY_CA_SERVER in /etc/puppet/puppet.conf.  I also 
made the necessary changes listed at 
http://docs.puppetlabs.com/guides/scaling_multiple_masters.html, including 
the certificate access on the CA system, the SSLProxyEngine on and 
ProxyPassMatch lines in the VHost definition in 
/etc/httpd/conf.d/puppetmaster.conf.  I'm positive I followed all the steps 
in the docs in order, but I'm not having any luck with external agents.
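
For concreteness, the puppet.conf side of that on a master-only box is roughly 
the following (the CA hostname is a placeholder):

    # /etc/puppet/puppet.conf on each master-only system
    [main]
        ca_server = caserver.example.com
    [master]
        ca = false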

If I run puppet agent -t on the master-only system (with its server in 
puppet.conf set to itself) it works fine--it can talk to the CA and talk to 
itself, and all is right with the world.

If I run puppet agent -t on a client host, pointing at the load balancer's 
address (or even pointing directly at the master-only system's real 
hostname), I get:

[root@elmer ~]# puppet agent -t
Info: Creating a new SSL key for elmer.allstate.com
Error: Could not request certificate: Error 400 on SERVER: this master is 
not a CA
Exiting; failed to retrieve certificate and waitforcert is disabled


I've looked at the logs, enabled debug logging in the webserver with 
LogLevel, dug around everywhere I can think of, and I see no sign of any 
actual proxying going on.  tcpdump certainly shows no attempt by the 
master-only system to contact the CA.

What it LOOKS like is happening is that apache is not actually proxying 
anything, the request gets passed to the puppet master app running under 
passenger, and it (rightly) says I'm not a CA because 
/etc/puppet/puppet.conf says so.

I do not see any errors in the logs about proxy attempts failing for this 
agent.  I do see workers being attached for proxy purposes:

[Tue Oct 01 13:48:26 2013] [debug] proxy_util.c(1833): proxy: grabbed 
scoreboard slot 0 in child 27434 for worker 
https://caserver.allstate.com:8140/$1
[Tue Oct 01 13:48:26 2013] [debug] proxy_util.c(1852): proxy: worker 
https://caserver.allstate.com:8140/$1 already initialized
[Tue Oct 01 13:48:26 2013] [debug] proxy_util.c(1949): proxy: initialized 
single connection worker 0 in child 27434 for caserver.allstate.com)


I've repeatedly re-checked the settings in /etc/puppet/puppet.conf, 
/etc/httpd/conf.d/passenger.conf, /etc/httpd/conf.d/puppetmaster.conf, etc. 
against the documentation and I am not seeing any errors.

This seems like I have to be overlooking something really basic, and I'm 
going to feel stupid when I find it, but it's right in my critical path 
right now and I can't see it.  Anyone have any suggestions?  I can provide 
config files and log files if need be, but I'm trying to avoid all the 
redacting I'd need to do (my server is not literally named caserver etc).

Thanks

Pete



Re: [Puppet Users] Pulling my hair out with CA proxying

2013-10-01 Thread Pete Hartman
I have to do more testing to determine for certain, but it appears to
have been some combination of
1) the order in which modules were loaded, and
2) not having mod_proxy_http loaded.





On Tue, Oct 1, 2013 at 2:39 PM, Pete Hartman pete.hart...@gmail.com wrote:
 I am trying to establish what looks like a common pattern for scaling
 puppet. My main departure is that I'm using an F5 rather than an apache load
 balancer.  Namely, I want to have my puppet agents go through the F5 to a
 pool of master-only systems, and any certificate activity to get proxied
 by those masters through to one single Certificate Authority.  That CA
 system is not part of the F5 pool; its role is to provide the CA, PuppetDB and
 Postgresql.  It is configured as a master because that was the easiest way
 to get a CA stood up, but I don't intend to use it as a master in normal
 operation (and in fact I don't plan to have it hosting any modules).

 I'm using RHEL 6, Apache, Passenger, and Open Source Puppet.

 I initially set up passenger using puppetlabs/passenger from the Forge,
 (which got me most of the way there but not fully configured).  All of these
 steps worked fine for the CA system to configure it as a working master (I
 have tested by registering systems with it, but then done puppet cert clean
 and wiped the test systems' ssl directories).

 I then set up my first master-only system the same way, except I didn't
 actually start the master service (as the docs say) until after I had set ca
 = false and ca_server = $MY_CA_SERVER in /etc/puppet/puppet.conf.  I also made
 the necessary changes listed at
 http://docs.puppetlabs.com/guides/scaling_multiple_masters.html, including
 the certificate access on the CA system, the SSLProxyEngine on and
 ProxyPassMatch lines in the VHost definition in
 /etc/httpd/conf.d/puppetmaster.conf.  I'm positive I followed all the steps
 in the docs in order, but I'm not having any luck with external agents.

 If I run puppet agent -t on the master-only system (with its server in
 puppet.conf set to itself) it works fine--it can talk to the CA and talk to
 itself, and all is right with the world.

 If I run puppet agent -t on a client host, pointing at the load balancer's
 address (or even pointing directly at the master-only system's real
 hostname), I get:

 [root@elmer ~]# puppet agent -t
 Info: Creating a new SSL key for elmer.allstate.com
 Error: Could not request certificate: Error 400 on SERVER: this master is
 not a CA
 Exiting; failed to retrieve certificate and waitforcert is disabled


 I've looked at the logs, enabled debug logging in the webserver with
 LogLevel, dug around everywhere I can think of, and I see no sign of any
 actual proxying going on.  tcpdump certainly shows no attempt by the
 master-only system to contact the CA.

 What it LOOKS like is happening is that apache is not actually proxying
 anything, the request gets passed to the puppet master app running under
 passenger, and it (rightly) says I'm not a CA because
 /etc/puppet/puppet.conf says so.

 I do not see any errors in the logs about proxy attempts failing for this
 agent.  I do see workers being attached for proxy purposes:

 [Tue Oct 01 13:48:26 2013] [debug] proxy_util.c(1833): proxy: grabbed
 scoreboard slot 0 in child 27434 for worker
 https://caserver.allstate.com:8140/$1
 [Tue Oct 01 13:48:26 2013] [debug] proxy_util.c(1852): proxy: worker
 https://caserver.allstate.com:8140/$1 already initialized
 [Tue Oct 01 13:48:26 2013] [debug] proxy_util.c(1949): proxy: initialized
 single connection worker 0 in child 27434 for caserver.allstate.com)


 I've repeatedly re-checked the settings in /etc/puppet/puppet.conf,
 /etc/httpd/conf.d/passenger.conf, /etc/httpd/conf.d/puppetmaster.conf, etc.
 against the documentation and I am not seeing any errors.

 This seems like I have to be overlooking something really basic, and I'm
 going to feel stupid when I find it, but it's right in my critical path
 right now and I can't see it.  Anyone have any suggestions?  I can provide
 config files and log files if need be, but I'm trying to avoid all the
 redacting I'd need to do (my server is not literally named caserver etc).

 Thanks

 Pete



[Puppet Users] puppetlabs/puppetdb module when using passenger for master

2013-09-06 Thread Pete Hartman
I'm working on configuring a master in a lab environment, using Puppet Open 
Source.  My master is running RHEL 6.

I want to use modules to manage the master itself as much as possible, so I 
can use puppet to bootstrap itself as I go forward and move into production.

Using puppetlabs/puppetdb to configure puppetdb, I've overcome most of my 
issues but I have two questions.

1) I had to set max-threads higher than my CPU count in 
/etc/puppetdb/conf.d/jetty.ini before I could get jetty to behave well.  I 
haven't yet determined if there is a way through the puppetdb module to 
manage this directly--I plan to dig into that, but if someone knows off the 
top of their head, I'd like to know.
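
A sketch of managing that one setting with the inifile module in the meantime, 
assuming the stock jetty.ini path (the value is illustrative):

    # manage max-threads in jetty.ini until the puppetdb module exposes it
    ini_setting { 'puppetdb jetty max-threads':
      ensure  => present,
      path    => '/etc/puppetdb/conf.d/jetty.ini',
      section => 'jetty',
      setting => 'max-threads',
      value   => '100',
      notify  => Service['puppetdb'],
    }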

2) less likely for me to find in the docs, and no luck googling so 
far--when puppetdb modules are applied, they attempt to restart 
puppetmaster.  But since the puppet master is actually running out of 
passenger/apache, the restart needs to be 'service httpd restart', not 
'service puppetmaster restart', and the puppetmaster service restart fails. 
Is there a way to tell the module that I'm using passenger and should 
restart httpd instead?  Should I just link /etc/init.d/puppetmaster to 
/etc/init.d/httpd?  That seems like an obvious solution, but I'm not sure 
if it's right.

Thanks!

# cat manifests/master-config.pp
include epel
class { 'puppetdb':
  listen_address   => 'puppet.example.com',
  open_listen_port => true,
}
class { 'puppetdb::master::config': }

selboolean { 'httpd_can_network_connect':
  persistent => true,
  value      => on,
}

# puppet apply master-config.pp
Warning: Config file /etc/puppet/hiera.yaml not found, using Hiera defaults
Error: Could not start Service[puppetmaster]: Execution of '/sbin/service 
puppetmaster start' returned 1:
Error: /Stage[main]/Puppetdb::Master::Config/Service[puppetmaster]/ensure: 
change from stopped to running failed: Could not start 
Service[puppetmaster]: Execution of '/sbin/service puppetmaster start' 
returned 1:



Re: [Puppet Users] puppetlabs/puppetdb module when using passenger for master

2013-09-06 Thread Pete Hartman
Yes, I did set the max-threads count manually.  I'm pretty green, and
I'm under a fairly aggressive schedule, so I'm not very likely to file
a patch soon, but maybe once I get through the next two months.  I
expect not to be so green by then :-).  In the meantime I will work
out a workaround.  I only have 6 production masters to deploy so even
manual intervention will work but I figure I'll play with delivering
the file etc. before I go that route.

On the master service, that should help.  I will try the first
suggestion, and if it doesn't work I'll go with the second and make
sure my documentation has good warnings about watching it during
upgrades.

I really appreciate the pointers, thank you!

Pete

On Fri, Sep 6, 2013 at 3:49 PM, Ken Barber k...@puppetlabs.com wrote:
 1) I had to set max-threads higher than my CPU count in
 /etc/puppetdb/conf.d/jetty.ini before I could get jetty to behave well.  I
 haven't yet determined if there is a way through the puppetdb module to
 manage this directly--I plan to dig on that, but if someone knows off the
 top of their heads I'd like to know

 From a PuppetDB perspective, we already have a fix for this:
 http://projects.puppetlabs.com/issues/22168 ... just awaiting the next
 release of 1.3.x/1.4.x.

 The gist of the change is that we will no longer allow settings that
 'break' Jetty 7, and will just raise the setting to an acceptable
 level and warn the user instead. The 'bug' doesn't exist in Jetty 9,
 so it should go away when we eventually get around to upgrading that.

 As far as the module, we don't support it directly as you can see:
 https://github.com/puppetlabs/puppetlabs-puppetdb/blob/master/manifests/server/jetty_ini.pp
 ... I presume you fixed this for yourself by adding an ini_setting.
 I'm less impressed with this mechanism today, and in the future I want
 to rewrite this implementation so settings aren't so 'static' - that
 way we won't fall behind. Either way, feel free to raise a bug on the
 fact we don't manage such a setting, or for bonus points raise a
 patch.

 Bugs for the module are here:
 https://github.com/puppetlabs/puppetlabs-puppetdb/issues

 2) less likely for me to find in the docs, and no luck googling so far--when
 puppetdb modules are applied, they attempt to restart puppetmaster.  But
 since the puppet master is actually running out of passenger/apache, the
 restart needs to be service httpd restart, not service puppetmaster restart,
 and the puppetmaster service restart fails. Is there a way to tell the
 module that I'm using passenger and should restart httpd instead?  Should I
 just link /etc/init.d/puppetmaster to /etc/init.d/httpd?  That seems like an
 obvious solution, but I'm not sure if it's right.

 So if you have 'httpd' managed elsewhere, ie:

 service { 'httpd':
   ensure    => running,
   enable    => true,
   hasstatus => true,
 }

 You can just use:

 class { 'puppetdb::master::config':
   puppet_service_name => 'httpd',
 }

 Although I'm wary of parse order being a problem here:
 https://github.com/puppetlabs/puppetlabs-puppetdb/blob/master/manifests/master/config.pp#L145-L149

 So ... if the above doesn't work for you try:

 class { 'puppetdb::master::config':
   restart_puppet => false,
 }
 Class['puppetdb::master::puppetdb_conf'] ~> Service['httpd']

 Which is nasty (and I'm warning you now it will probably break at a
 random point in the future, since it taps internals), but replicates
 the internals without trying to conditionally declare service {
 'httpd': }. *sigh*.

 ken.




Re: [Puppet Users] Re: Puppet first run timing out

2013-09-05 Thread Pete Hartman
Being able to set the timeout long enough gave us enough data to find the 
problem.

Our SPARC build of OpenSSL used some configuration options that were, shall 
we say, non-optimal :-).

On a corrected OpenSSL build the SPARC times are now in the same ballpark 
as the x86 times.

Thanks again for your help Cory.

Pete

On Wednesday, September 4, 2013 6:56:34 PM UTC-5, Cory Stoker wrote:

 We have lots of puppet clients on crappy bandwidth that would time out 
 like this as well.  The option we changed to fix this is: 

 #Specify the timeout to wait for catalog in seconds 
 configtimeout = 600 

 The default time is like 60 or 120 secs.  Another thing you should do 
 is check out the logs of the web server if you are using passenger. 
 You should see a ton of GET requests when you need to sync plugins. 
 To force your puppet agent to redownload stuff remove the $vardir/lib 
 directory on the agent. 


 On Wed, Sep 4, 2013 at 1:48 PM, Pete Hartman pete.h...@gmail.com wrote: 
  I'm having a similar problem. 
  
  I know for a fact that I am not contending with other agents, because 
 this 
  is in a lab environment and none of my agents is scheduled for periodic 
 runs 
  (yet). 
  
  I have successfully run puppet agent -t first time, signed the cert, and 
 run 
  it a second time to pull over stdlib and other modules on agents running 
  RHEL 6 and Solaris 10u10 x86. 
  
  But I'm getting this timeout on a Solaris 10u10 box running on a T4-1 
 SPARC 
  system. 
  
  This was my third run: 
  
   # date;puppet agent -t;date 
  Wed Sep  4 14:12:05 CDT 2013 
  Info: Retrieving plugin 
  Notice: 
 /File[/var/lib/puppet/lib/puppet/parser/functions/count.rb]/ensure: 
  defined content as '{md5}9eb74eccd93e2b3c87fd5ea14e329eba' 
  Notice: 
  
 /File[/var/lib/puppet/lib/puppet/parser/functions/validate_bool.rb]/ensure: 
  defined content as '{md5}4ddffdf5954b15863d18f392950b88f4' 
  Notice: 
  
 /File[/var/lib/puppet/lib/puppet/parser/functions/get_module_path.rb]/ensure: 

  defined content as '{md5}d4bf50da25c0b98d26b75354fa1bcc45' 
  Notice: 
  
 /File[/var/lib/puppet/lib/puppet/parser/functions/is_ip_address.rb]/ensure: 
  defined content as '{md5}a714a736c1560e8739aaacd9030cca00' 
  Error: 
  /File[/var/lib/puppet/lib/puppet/parser/functions/is_numeric.rb]/ensure: 
  change from absent to file failed: execution expired 
  
  Error: Could not retrieve plugin: execution expired 
  Info: Caching catalog for AGENT 
  Info: Applying configuration version '1378322110' 
  Notice: Finished catalog run in 0.11 seconds 
  Wed Sep  4 14:15:58 CDT 2013 
  
  
  Each time I've run it, I get about 10 or so files and then I get 
 execution 
  expired. 
  
  What I'd really like to see is whether I can increase the expiry 
 timeout. 
  
  
  Some other details:  The master is RHEL 6 on a Sun/Oracle X4800, lots 
 and 
  lots of fast cores and memory.  I'm using Puppet Open Source. I'm using 
  passenger.  I have no real modules other than some basic forge modules 
 I've 
  installed to start out with. 
  
  [root@MASTER audit]# cd /etc/puppet/modules 
  [root@MASTER modules]# ls 
  apache  concat  epel  firewall  inifile  passenger  puppet  puppetdb 
  ruby 
  stdlib 
  
  I briefly disabled SELinux on the master, but saw no change in behavior. 
  
  I'm certain that the firewall is right because other agents have had no 
  problems.  iptables IS enabled, however. 
  
  The master and the agent are on the same subnet, so I don't suspect a 
  network performance issue directly. 
  
  On Solaris, because the vendor supplied OpenSSL is antique and doesn't 
  include SHA256, we have built our own OpenSSL and our own Ruby using 
 that 
  OpenSSL Library.  Even though SPARC is a 64 bit architecture, Ruby seems 
 to 
  default to a 32 bit build, so we built OpenSSL as 32 bit as well to 
 match. 
  I've got an open question to the guy responsible for that to see how 
 hard it 
  would be to try to build Ruby as 64 bit, that's likely a next test. 
  
  I have not yet run snoop on the communication to see what's going on the 
  network side, but as I say I don't really expect the network to be the 
  problem, between being on the same subnet and success on other systems 
 with 
  higher clock speeds. 
  
  Any pointers to other possible causes or somewhere I can (even 
 temporarily) 
  increase the timeout would be appreciated. 
  
  
  
  
  On Thursday, August 8, 2013 8:56:33 AM UTC-5, jcbollinger wrote: 
  
  
  
  On Wednesday, August 7, 2013 11:46:06 AM UTC-5, Cesar Covarrubias 
 wrote: 
  
  I am already using Passenger. My master is still being minimally 
  utilized, as I'm just now beginning the deployment process. In terms 
 of 
  specs, it is running 4 cores and 8GB of mem and 4GB of swap. During a 
 run, 
  the total system usage is no more than 2GB and no swap. No network 
  congestion and I/O is low on the SAN which these VMs use. 
  
  The odd thing is once the hosts get all the libs sync'd

Re: [Puppet Users] Re: Puppet first run timing out

2013-09-04 Thread Pete Hartman
I'm having a similar problem.

I know for a fact that I am not contending with other agents, because this 
is in a lab environment and none of my agents is scheduled for periodic 
runs (yet).

I have successfully run puppet agent -t first time, signed the cert, and 
run it a second time to pull over stdlib and other modules on agents 
running RHEL 6 and Solaris 10u10 x86.

But I'm getting this timeout on a Solaris 10u10 box running on a T4-1 SPARC 
system.

This was my third run:

 # date;puppet agent -t;date
Wed Sep  4 14:12:05 CDT 2013
Info: Retrieving plugin
Notice: /File[/var/lib/puppet/lib/puppet/parser/functions/count.rb]/ensure: 
defined content as '{md5}9eb74eccd93e2b3c87fd5ea14e329eba'
Notice: 
/File[/var/lib/puppet/lib/puppet/parser/functions/validate_bool.rb]/ensure: 
defined content as '{md5}4ddffdf5954b15863d18f392950b88f4'
Notice: 
/File[/var/lib/puppet/lib/puppet/parser/functions/get_module_path.rb]/ensure: 
defined content as '{md5}d4bf50da25c0b98d26b75354fa1bcc45'
Notice: 
/File[/var/lib/puppet/lib/puppet/parser/functions/is_ip_address.rb]/ensure: 
defined content as '{md5}a714a736c1560e8739aaacd9030cca00'
Error: 
/File[/var/lib/puppet/lib/puppet/parser/functions/is_numeric.rb]/ensure: 
change from absent to file failed: execution expired
Error: Could not retrieve plugin: execution expired
Info: Caching catalog for AGENT
Info: Applying configuration version '1378322110'
Notice: Finished catalog run in 0.11 seconds
Wed Sep  4 14:15:58 CDT 2013


Each time I've run it, I get about 10 or so files and then I get execution 
expired.

What I'd really like to see is whether I can increase the expiry timeout.


Some other details:  The master is RHEL 6 on a Sun/Oracle X4800, lots and 
lots of fast cores and memory.  I'm using Puppet Open Source. I'm using 
passenger.  I have no real modules other than some basic forge modules I've 
installed to start out with.

[root@MASTER audit]# cd /etc/puppet/modules
[root@MASTER modules]# ls
apache  concat  epel  firewall  inifile  passenger  puppet  puppetdb  ruby  
stdlib

I briefly disabled SELinux on the master, but saw no change in behavior.

I'm certain that the firewall is right because other agents have had no 
problems.  iptables IS enabled, however.

The master and the agent are on the same subnet, so I don't suspect a 
network performance issue directly.

On Solaris, because the vendor supplied OpenSSL is antique and doesn't 
include SHA256, we have built our own OpenSSL and our own Ruby using that 
OpenSSL Library.  Even though SPARC is a 64 bit architecture, Ruby seems to 
default to a 32 bit build, so we built OpenSSL as 32 bit as well to match.  
I've got an open question to the guy responsible for that to see how hard 
it would be to try to build Ruby as 64 bit, that's likely a next test.

I have not yet run snoop on the communication to see what's going on the 
network side, but as I say I don't really expect the network to be the 
problem, between being on the same subnet and success on other systems with 
higher clock speeds.

Any pointers to other possible causes or somewhere I can (even temporarily) 
increase the timeout would be appreciated.



On Thursday, August 8, 2013 8:56:33 AM UTC-5, jcbollinger wrote:



 On Wednesday, August 7, 2013 11:46:06 AM UTC-5, Cesar Covarrubias wrote:

 I am already using Passenger. My master is still being minimally 
 utilized, as I'm just now beginning the deployment process. In terms of 
 specs, it is running 4 cores and 8GB of mem and 4GB of swap. During a run, 
 the total system usage is no more than 2GB and no swap. No network 
 congestion and I/O is low on the SAN which these VMs use. 

 The odd thing is once the hosts get all the libs sync'd, performance is 
 fine on further changes. It's quite perplexing. 


 To be certain that contention by multiple Puppet clients does not 
 contribute to the issue, ensure that the problem still occurs when only one 
 client attempts to sync at a time.  If it does, then the issue probably has 
 something to do with the pattern of communication between client and 
 master, for that's the main thing that differs between an initial run and 
 subsequent ones.

 During the initial plugin sync, the master delivers a moderately large 
 number of small files to the client, whereas on subsequent runs it usually 
 delivers only a catalog, and perhaps, later, 'source'd Files declared in 
 your manifests.  There may be a separate connection established between 
 client and master for each synced file, and anything that might slow that 
 down could contribute to the problem.

 For instance, if a firewall on client, master, or any device between makes 
 it slow or unreliable to establish connections; if multiple clients are 
 configured with the same IP number; if a router anywhere along the network 
 path is marginal; if a leg of the path is wireless and subject to 
 substantial radio interference; if any part of your network is suffering 
 from a denial-of-service attack; 

Re: [Puppet Users] Re: Puppet first run timing out

2013-09-04 Thread Pete Hartman
Cool, thank you very much for the information.

On Wed, Sep 4, 2013 at 6:56 PM, Cory Stoker cory.sto...@gmail.com wrote:
 We have lots of puppet clients on crappy bandwidth that would time out
 like this as well.  The option we changed to fix this is:

 #Specify the timeout to wait for catalog in seconds
 configtimeout = 600

 The default time is like 60 or 120 secs.  Another thing you should do
 is check out the logs of the web server if you are using passenger.
 You should see a ton of GET requests when you need to sync plugins.
 To force your puppet agent to redownload stuff remove the $vardir/lib
 directory on the agent.


 On Wed, Sep 4, 2013 at 1:48 PM, Pete Hartman pete.hart...@gmail.com wrote:
 I'm having a similar problem.

 I know for a fact that I am not contending with other agents, because this
 is in a lab environment and none of my agents is scheduled for periodic runs
 (yet).

 I have successfully run puppet agent -t first time, signed the cert, and run
 it a second time to pull over stdlib and other modules on agents running
 RHEL 6 and Solaris 10u10 x86.

 But I'm getting this timeout on a Solaris 10u10 box running on a T4-1 SPARC
 system.

 This was my third run:

  # date;puppet agent -t;date
 Wed Sep  4 14:12:05 CDT 2013
 Info: Retrieving plugin
 Notice: /File[/var/lib/puppet/lib/puppet/parser/functions/count.rb]/ensure:
 defined content as '{md5}9eb74eccd93e2b3c87fd5ea14e329eba'
 Notice:
 /File[/var/lib/puppet/lib/puppet/parser/functions/validate_bool.rb]/ensure:
 defined content as '{md5}4ddffdf5954b15863d18f392950b88f4'
 Notice:
 /File[/var/lib/puppet/lib/puppet/parser/functions/get_module_path.rb]/ensure:
 defined content as '{md5}d4bf50da25c0b98d26b75354fa1bcc45'
 Notice:
 /File[/var/lib/puppet/lib/puppet/parser/functions/is_ip_address.rb]/ensure:
 defined content as '{md5}a714a736c1560e8739aaacd9030cca00'
 Error:
 /File[/var/lib/puppet/lib/puppet/parser/functions/is_numeric.rb]/ensure:
 change from absent to file failed: execution expired

 Error: Could not retrieve plugin: execution expired
 Info: Caching catalog for AGENT
 Info: Applying configuration version '1378322110'
 Notice: Finished catalog run in 0.11 seconds
 Wed Sep  4 14:15:58 CDT 2013


 Each time I've run it, I get about 10 or so files and then I get execution
 expired.

 What I'd really like to see is whether I can increase the expiry timeout.


 Some other details:  The master is RHEL 6 on a Sun/Oracle X4800, lots and
 lots of fast cores and memory.  I'm using Puppet Open Source. I'm using
 passenger.  I have no real modules other than some basic forge modules I've
 installed to start out with.

 [root@MASTER audit]# cd /etc/puppet/modules
 [root@MASTER modules]# ls
 apache  concat  epel  firewall  inifile  passenger  puppet  puppetdb  ruby
 stdlib

 I briefly disabled SELinux on the master, but saw no change in behavior.

 I'm certain that the firewall is right because other agents have had no
 problems.  iptables IS enabled, however.

 The master and the agent are on the same subnet, so I don't suspect a
 network performance issue directly.

 On Solaris, because the vendor supplied OpenSSL is antique and doesn't
 include SHA256, we have built our own OpenSSL and our own Ruby using that
 OpenSSL Library.  Even though SPARC is a 64 bit architecture, Ruby seems to
 default to a 32 bit build, so we built OpenSSL as 32 bit as well to match.
 I've got an open question to the guy responsible for that to see how hard it
 would be to try to build Ruby as 64 bit, that's likely a next test.

 I have not yet run snoop on the communication to see what's going on the
 network side, but as I say I don't really expect the network to be the
 problem, between being on the same subnet and success on other systems with
 higher clock speeds.

 Any pointers to other possible causes or somewhere I can (even temporarily)
 increase the timeout would be appreciated.




 On Thursday, August 8, 2013 8:56:33 AM UTC-5, jcbollinger wrote:



 On Wednesday, August 7, 2013 11:46:06 AM UTC-5, Cesar Covarrubias wrote:

 I am already using Passenger. My master is still being minimally
 utilized, as I'm just now beginning the deployment process. In terms of
 specs, it is running 4 cores and 8GB of mem and 4GB of swap. During a run,
 the total system usage is no more than 2GB and no swap. No network
 congestion and I/O is low on the SAN which these VMs use.

 The odd thing is once the hosts get all the libs sync'd, performance is
 fine on further changes. It's quite perplexing.


 To be certain that contention by multiple Puppet clients does not
 contribute to the issue, ensure that the problem still occurs when only one
 client attempts to sync at a time.  If it does, then the issue probably has
 something to do with the pattern of communication between client and master,
 for that's the main thing that differs between an initial run and subsequent
 ones.

 During the initial plugin sync, the master delivers a moderately large
 number