On Mon, Oct 1, 2012 at 12:13 PM, jcbollinger <john.bollin...@stjude.org>wrote:

>
> On Saturday, September 29, 2012 12:03:33 AM UTC-5, Jeremy wrote:
>>
>>
>> On Fri, Sep 28, 2012 at 5:37 PM, jcbollinger <john.bo...@stjude.org>wrote:
>>
>>> [...]
>>>
>>> How big are the real deployment files?  I wouldn't think that parsing
>>> and processing even moderately large YAML files would be prohibitively
>>> expensive in itself, especially when compared to the work the master must
>>> perform to compile all the DSL code.  In any case, you should be able to
>>> test that against real data by wrapping a test harness around the innards
>>> of your function.
>>>
>>>
>> Looking at the report metrics I can see that successful runs show config
>> retrieval taking up to 130 seconds, with around 110 seconds being most
>> common, so not much difference. When it fails, it usually fails with a
>> "Could not retrieve catalog from remote server: execution expired" error
>> followed by "Could not retrieve catalog; skipping run", and then proceeds
>> with the cached catalog. Currently the catalog has 370-390 resources
>> defined, with a change usually involving 170-180 resources.
>>
>>
> 370-390 resources is not unreasonably large.  It's somewhat surprising
> that so many changes happen each run (after the first), but that doesn't
> factor into catalog compilation time.
>
> The timings you report are potentially important, however, because they're
> running right about at the default client-side timeout for catalog requests
> (120s).  You could try setting the "configtimeout" configuration parameter
> to something a bit larger, say 150 (in the agent section).  That doesn't
> answer the question of what is causing compilation to take that long, but
> it probably gets you a lot fewer timeouts.
>
>
I've taken the suggestion and increased the agent configtimeout on the
client machines to see whether this reduces the "execution expired"
timeouts the engineers have been seeing and complaining about.
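For reference, the change amounts to a one-line addition in the agent
section of puppet.conf (the file path shown is the common default and may
differ on your distribution):

```ini
# /etc/puppet/puppet.conf  (path may vary by platform/packaging)
[agent]
    # Raise the catalog-request timeout above the observed ~110-130s
    # compile times; the default is 120 seconds.
    configtimeout = 150
```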

I still maintain that loading a file over the network is a pretty likely
> performance-killer.  I/O is in general far, far slower than computation,
> and network I/O is typically both slower and less consistent than local
> I/O.  As with anything performance-related, however, there is no
> alternative to testing for determining reliable performance characteristics.
>
>
I'm working on a process to retrieve the deployment configuration file from
the S3 bucket outside of Puppet control so I can process it locally and see
if that improves the config generation time.
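The parse step can be timed in isolation with a small harness, so the
network fetch cost is excluded. A minimal sketch, using a throwaway
Tempfile as a stand-in for the real deployment file pulled from S3
out-of-band (the sample content here is made up purely for illustration):

```ruby
require 'yaml'
require 'benchmark'
require 'tempfile'

# Stand-in for the locally cached copy of the deployment file; in practice
# this would be the file fetched from the S3 bucket by an external process.
sample = Tempfile.new(['deployment', '.yaml'])
sample.write({ 'app' => { 'version' => '1.2.3' }, 'hosts' => %w[a b] }.to_yaml)
sample.close

# Time only the YAML parse, not the download.
config  = nil
elapsed = Benchmark.realtime do
  config = YAML.load_file(sample.path)
end
puts format('parsed %d top-level keys in %.4f s', config.keys.length, elapsed)
```

Running the same harness against the real deployment file should show
whether YAML parsing itself is a meaningful share of the compile time, or
whether the cost is all in the network retrieval.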


> You may also want to check whether your master is under-resourced.  The
> master typically consumes 100s of MB, and if it has to swap parts of that
> back and forth between physical and virtual memory then that will slow
> everything down.  Also, if you're using the built-in "webrick" server then
> you should be aware that it doesn't scale especially well, especially for
> medium-large catalogs.  It is single-threaded, so if two nodes request
> catalogs at the same time, then one has to wait for the master to serve the
> other first.  The usual advice for that situation is to run the master via
> passenger.
>

This is a relatively small installation with only a handful of clients.
Still, the master is running Apache with Passenger instead of Webrick and
utilizing async queuing.
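On the resourcing point, a quick sanity check of a process's resident
memory can be done with plain ps (a generic sketch only; it uses the
shell's own PID for illustration, since how you look up the master's PID
differs between Webrick and Passenger setups):

```shell
# Substitute the master process's PID here; $$ (this shell) is a placeholder.
pid=$$
# RSS is reported in kilobytes on Linux; strip whitespace before arithmetic.
rss_kb=$(ps -o rss= -p "$pid" | tr -d ' ')
echo "RSS: $((rss_kb / 1024)) MB"
```

If the master's RSS approaches the machine's physical memory, swapping is
a likely contributor to the slow compiles.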


> John
>
>  --
> You received this message because you are subscribed to the Google Groups
> "Puppet Users" group.
> To view this discussion on the web visit
> https://groups.google.com/d/msg/puppet-users/-/QHeykExDSRIJ.
>
> To post to this group, send email to puppet-users@googlegroups.com.
> To unsubscribe from this group, send email to
> puppet-users+unsubscr...@googlegroups.com.
> For more options, visit this group at
> http://groups.google.com/group/puppet-users?hl=en.
>

