Hi Kirk,
Sorry, I've only just noticed this thread. I've also experienced the same
problem. I've commented on this issue (although it might not be the correct
bug after all, since it's not quite the same problem):
https://projects.puppetlabs.com/issues/18812#note-1
It would be great to disable
Josh, thanks for the info. Based on your description, I think I was seeing
a bug, because the agents were all definitely getting certificates. When
I did the tcpdump, I could see them being used in the exchange. So, it
sounds like the puppetmaster running in webrick was still performing a
r
Hi Kirk,
On Thu, Jan 10, 2013 at 7:58 AM, Kirk Steffensen
wrote:
> SOLVED: I didn't have the VMs in the host's /etc/hosts. Once I put all of
> them into the host (i.e, the Puppet Master), then everything sped up to full
> speed on the clients. (I had been thinking it was a problem on the client
Huh, good find. I'm guessing you're still running webrick? Based on a
quick google, it looks like it does reverse DNS by default. Most web
servers have this disabled by default, but I think webrick does not
(Apache, at least, disables DNS lookups by default, I'm sure).
I think if Puppet passes DoNot
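(For anyone finding this thread later: the switch Josh is alluding to lives at Ruby's socket layer. A minimal sketch of that mechanism follows; whether and where Puppet actually sets it depends on your Puppet and Ruby versions, so treat this as illustration, not Puppet internals.)

```ruby
require 'socket'

# Ruby's global resolver flag: when reverse lookups are enabled, each
# accepted connection performs a reverse (PTR) DNS query to name the peer,
# blocking until the resolver answers or times out -- consistent with the
# ~10 s stalls described in this thread.
Socket.do_not_reverse_lookup = true

server = TCPServer.new('127.0.0.1', 0)
# The same switch also exists per socket (BasicSocket#do_not_reverse_lookup);
# webrick exposes an equivalent option through its own server configuration.
server.do_not_reverse_lookup = true
puts server.do_not_reverse_lookup  # => true
server.close
```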
SOLVED: I didn't have the VMs in the host's /etc/hosts. Once I put all of
them into the host (i.e, the Puppet Master), then everything sped up to
full speed on the clients. (I had been thinking it was a problem on the
client side, so all of my troubleshooting had been isolated to the clients.)
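To see why the /etc/hosts entries fix it: the master's reverse lookup of each connecting agent's IP can then be answered from the hosts file instead of waiting on DNS. A small sketch with Ruby's Resolv; the hostnames, addresses, and temp file here are made up for illustration:

```ruby
require 'resolv'
require 'tempfile'

# Throwaway hosts file standing in for the master's /etc/hosts after the
# VMs were added (names and addresses are illustrative, not from the thread).
Tempfile.create('hosts') do |f|
  f.write("192.168.33.10 centos-vm\n192.168.33.11 ubuntu-vm\n")
  f.flush

  hosts = Resolv::Hosts.new(f.path)
  # Forward lookup (name -> address), plus the reverse lookup
  # (address -> name) that webrick performs for each connecting agent --
  # both answered locally, with no DNS round trip to time out.
  puts hosts.getaddress('centos-vm')   # => 192.168.33.10
  puts hosts.getname('192.168.33.10')  # => centos-vm
end
```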
Josh,
use_srv_records is not set in puppet.conf. `puppet config print
use_srv_records` shows it set to the default of false.
I ran tcpdump from inside the Vagrant VM during pluginsync. On eth1, where
the VM is connecting to the puppet master running on the host, the only
calls are puppet ca
Hi Kirk,
Do you happen to have SRV lookups enabled via the `use_srv_records`
setting? You may want to run tcpdump and look for extraneous DNS
lookups.
Josh
On Wed, Jan 9, 2013 at 2:00 PM, Kirk Steffensen
wrote:
> Here is the strace output from one of the 10-second periods while waiting
> for th
Here is the strace output from one of the 10-second periods while waiting
for the File notice to appear. https://gist.github.com/4497263
The strace output came in two bursts during this 10-second window.
The thing that leaps out at me is that of the 4061 lines of output, 3754 of
them are rt_sigpro
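A quick way to get that kind of breakdown from any strace capture is to tally the syscall name at the start of each line. This is a throwaway helper sketched for illustration, not anything from the thread:

```ruby
# Count syscall frequencies in strace output lines; the token before the
# opening '(' is the syscall name (an optional PID column precedes it
# when strace was run with -f).
def syscall_counts(lines)
  lines.each_with_object(Hash.new(0)) do |line, counts|
    name = line[/\A(?:\d+\s+)?(\w+)\(/, 1]
    counts[name] += 1 if name
  end
end

sample = [
  'rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0',
  'rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0',
  'read(3, "...", 4096) = 312',
]
p syscall_counts(sample)  # => {"rt_sigprocmask"=>2, "read"=>1}
```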
> Ken, thanks. Unfortunately, (from a troubleshooting standpoint), it only
> took one or two seconds to sync stdlib on the local box.
>
> rm -rf /var/lib/puppet/lib/*
> puppet agent --test
>
> I saw the same stream of File notices, but they streamed by in real time,
> instead of taking 10 seconds
Ken, thanks. Unfortunately, (from a troubleshooting standpoint), it only
took one or two seconds to sync stdlib on the local box.
rm -rf /var/lib/puppet/lib/*
puppet agent --test
I saw the same stream of File notices, but they streamed by in real time,
instead of taking 10 seconds per notice.
Damn, I thought John had it :-(.
Here's a question I hadn't asked - what's your plugin sync performance
like on the puppetmaster node itself? I.e., clear all synced files and
run it locally and time it, comparing to the other nodes.
On Wed, Jan 9, 2013 at 4:02 PM, Kirk Steffensen
wrote:
> John, I
John, I don't believe there is any name resolution issue. Both the CentOS
and Ubuntu base boxes have "192.168.33.1 puppet" in their /etc/hosts. From
inside the Vagrant VM, a ping to puppet responds immediately, and a
tracepath to puppet returns as quickly as I can hit enter, showing
sub-milli
On Monday, January 7, 2013 3:03:06 PM UTC-6, Kirk Steffensen wrote:
>
> The second time it runs (with pluginsync enabled), it only pauses at
> the "Info: Retrieving plugin" notice for a few seconds. So, it sounds like
> the md5sum is not the bottleneck.
Is it possible that you have a name re
The second time it runs (with pluginsync enabled), it only pauses at
the "Info: Retrieving plugin" notice for a few seconds. So, it sounds like
the md5sum is not the bottleneck.
The specific setup that I'm using is a pretty beefy Xeon workstation with
24 GB of RAM acting as the puppetmaster fo
There are two primary bottlenecks for pluginsync:
* the md5 sum it needs to do on both ends to compare and see if a
resync is needed
* the transfer itself
If you run puppet a second time, with pluginsync enabled (but this
time nothing should be transferred) - how much time does it take
compared t
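The first of those bullets is easy to reason about in isolation. Here is a sketch of the kind of checksum gate pluginsync applies before transferring a file; the helper and file are hypothetical, not Puppet's actual code:

```ruby
require 'digest'
require 'tempfile'

# Resync only when the local MD5 differs from the checksum the master
# advertises for the file (hypothetical helper, not Puppet internals).
def needs_resync?(local_path, remote_md5)
  return true unless File.exist?(local_path)
  Digest::MD5.file(local_path).hexdigest != remote_md5
end

Tempfile.create('plugin.rb') do |f|
  f.write("# a previously synced plugin file\n")
  f.flush
  md5 = Digest::MD5.file(f.path).hexdigest
  puts needs_resync?(f.path, md5)       # => false: match, nothing transferred
  puts needs_resync?(f.path, '0' * 32)  # => true: mismatch forces a transfer
end
```

On a second run with nothing changed, every file falls into the first case, so only the checksum cost remains.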
Ken,
Thanks. I agree with your gut feeling, but after running the first round
of tests you suggested, I don't think that's it. Bandwidth is good and
there are no netstat errors. (Test results at the end of this email.)
Actually, I just realized that the Puppet client on the Ubuntu box is
ru
My immediate gut feeling on this would be that it's the network, not
Puppet, that is the issue, especially if another client is having
success at doing the sync.
It's a virt, so it could be hypervisor drivers or some other issue; it's
an old version of the kernel as well - it's more likely to happen -
although
Hi,
I have a fresh CentOS 5.8 Vagrant VM that I'm using to emulate a customer's
server. During the first Puppet run, it takes 13 minutes and 48 seconds to
sync the Puppet Labs stdlib module. On a similar Ubuntu 12.04.1 Vagrant
VM, Puppet starts up, and almost instantly goes from plugin sync t