On 10/30/14 4:32 PM, Georgi Todorov wrote:
> Chris, I sleep very well :). Our master is backed up hourly (the entire
> vm) and all configs go through git. Redeploying/restoring the master
> should be fairly quick (I have not tried it, though). Also, the way we
> use puppet, there is really no harm if it is down; it is only needed to
> push changes, which we don't do that often.
> 
> Ramin and Garrett, I was considering throwing more CPU at it, seeing how
> it is CPU bound, however strace told me something else was the problem.
> And I finally solved it. The culprit was Ruby. Puppet agent runs used to
> take anywhere from 30 to 250 seconds depending on ... the weather? I'm
> guessing it depended on where in the queue they were. The VM cluster is
> not oversubscribed, and in fact I had the VM isolated on a single DL580
> host for testing, just to make sure nothing was interfering. I ended up
> compiling ruby 2.1.4, installing all the gems needed for foreman (about
> 75), and now have both foreman and puppet master running on ruby 2.1.4.
> My load average on the machine is now ~9 (down from about 17), and
> requests in queue stay at 0 almost all the time, with the occasional
> "jump" to 20 - nothing like my constantly full queue before.
> 
> So, hopefully this will be helpful for anyone who is trying to run a
> puppet master on CentOS.
> 
> And thank you guys, I have actually read both of those links before and
> when we add the rest of our infra, if we start hitting a bottleneck,
> I'll split the master and increase the CPU count.
> 
> Cheers,
> Georgi
> 

Hi Georgi,

The catalog compilation time is how long it takes to compile the catalog
*on the master*. You can find it on CentOS with `grep Compile
/var/log/messages`. The amount of time it takes for your agent to run is
not at all tied to how long it takes to compile the catalog. Your puppet
agents do not talk to the puppet master once they have received the
catalog, except for file requests[1] and to submit a report.

If you are trying to solve long agent runs, check the agent logs, which
include timing information. Puppet Dashboard gives a good visualization
of this and will break down a run with times for each resource type.
Typical bottlenecks are exec, package, and service resources and custom
functions, especially package resources if you talk to the internet
instead of local mirrors.
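If you want a quick look without Dashboard, a single foreground test run
with evaluation tracing and the summary report shows per-resource timing
on the agent itself (a sketch; flag behavior and output format depend on
your Puppet version):

    # Run the agent once in the foreground, logging how long each
    # resource takes to evaluate and printing a timing summary at the end:
    puppet agent --test --evaltrace --summarize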

By chance are you serving any large binary files with Puppet?

[1] -
http://4.bp.blogspot.com/-0xlYPWw61Hw/UpVulZU1qTI/AAAAAAAAAwY/egPhvnpn0jI/s1600/puppet_technical_flow.jpg

Best regards,
-g

-- 
Garrett Honeycutt
@learnpuppet
Puppet Training with LearnPuppet.com
Mobile: +1.206.414.8658
