On Jun 1, 2011, at 7:04 AM, pachanga wrote:

> Hi!
> 
> I'm incrementally developing puppet rules on the master server and
> applying them to the testing machine. For some reason puppet agent
> applies changes from the master very slowly, about 3 minutes every
> time. At this speed it's very tiresome to test the new rules...
> 
> I'm running puppet agent as follows:
> 
> #puppet agent --no-daemonize --verbose --onetime --summarize --debug --trace
> 
> Here is what it shows in the end:
> 
> Changes:
> Events:
> Resources:
>            Total: 46
> Time:
>       Filebucket: 0.00
>             Exec: 0.00
>         Schedule: 0.00
>   Ssh authorized key: 0.00
>            Group: 0.00
>          Package: 0.00
>             User: 0.01
>          Service: 0.07
>   Config retrieval: 12.25
>         Last run: 1306935632
>             File: 84.25
>            Total: 96.58
> 
> During the run most of the time is spent in the following phases:
> 
> ....
> info: Applying configuration version '1306935546'
> debug: file_metadata supports formats: b64_zlib_yaml marshal pson raw
> yaml; using pson
> debug: file_metadata supports formats: b64_zlib_yaml marshal pson raw
> yaml; using pson
> debug: file_metadata supports formats: b64_zlib_yaml marshal pson raw
> yaml; using pson
> debug: file_metadata supports formats: b64_zlib_yaml marshal pson raw
> yaml; using pson
> debug: file_metadata supports formats: b64_zlib_yaml marshal pson raw
> yaml; using pson
> debug: Service[sshd](provider=redhat): Executing '/sbin/service sshd
> status'
> debug: Puppet::Type::Service::ProviderRedhat: Executing '/sbin/
> chkconfig sshd'
> debug: file_metadata supports formats: b64_zlib_yaml marshal pson raw
> yaml; using pson
> debug: file_metadata supports formats: b64_zlib_yaml marshal pson raw
> yaml; using pson
> debug: file_metadata supports formats: b64_zlib_yaml marshal pson raw
> yaml; using pson
> debug: Finishing transaction 23816162721700
> 
> Any ideas why this can be happening? Thanks.

This probably means you are doing one of these (all of which are slow):
1) Using "File" for big file(s).
2) Using "File" with "recurse => true" to deploy lots of little files.
3) Using "File" with "recurse => true" to deploy into a directory that contains lots of 
little files, even though you are only deploying very few ( purge => false ).
4) Using "File" over a connection with high ping.

Of course, combining more than one is even worse.
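
As a sketch (paths are hypothetical), case 2/3 typically looks like this; each file in the tree costs one file_metadata round trip, which is exactly the repeated "file_metadata supports formats" lines in your debug output:

```puppet
# Anti-pattern: recursively managing a large tree of small files.
# Every file triggers its own metadata request to the master.
file { '/opt/bigapp':
  ensure  => directory,
  source  => 'puppet:///modules/bigapp/files',
  recurse => true,
  purge   => false,
}
```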

Mitigations for 1 or 2:
1) Put your files in a package and let your package manager of choice handle it 
for you.
2) Use rsync with a cron job, or rsync driven by an exec resource.
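
For option 2, a rough sketch of the exec approach (server name and paths are made up; the onlyif dry-run keeps the exec from firing when nothing changed):

```puppet
# Sync a large tree with rsync instead of a recursive File resource.
exec { 'sync-bigapp':
  command => '/usr/bin/rsync -a --delete rsync://fileserver.example.com/bigapp/ /opt/bigapp/',
  # Dry run (-n) with itemized output (-i); grep succeeds only if
  # there is something to transfer, so the exec stays idempotent.
  onlyif  => '/usr/bin/rsync -ani --delete rsync://fileserver.example.com/bigapp/ /opt/bigapp/ | /bin/grep -q .',
}
```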

Mitigation for 3:
1) Try "recurse => remote".
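
Something like this (hypothetical paths): remote recursion copies the files from the master but skips enumerating every pre-existing local file in the directory:

```puppet
file { '/etc/bigapp/conf.d':
  ensure  => directory,
  source  => 'puppet:///modules/bigapp/conf.d',
  # Only recurse over the remote source; don't manage (or stat)
  # local-only files already sitting in the directory.
  recurse => remote,
}
```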

Mitigation for 4:
1) If the combined size of the files is small, you can try using "content" 
instead of "source" for the files.
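
For example (file name and template are made up): "content" ships the data inside the catalog itself, so there is no per-file metadata round trip over the slow link:

```puppet
file { '/etc/motd':
  # Inlined at catalog compile time on the master; no separate
  # fileserver request from the agent.
  content => template('bigapp/motd.erb'),
}
```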

-- 
You received this message because you are subscribed to the Google Groups 
"Puppet Users" group.