On Mon, Sep 12, 2011 at 10:50 AM, Ken Barber <[email protected]> wrote:
>>>> I'm launching puppet with 'service puppet restart'.
>
> Do you get any different results if you run it like:
>
> puppet agent -t
warning: You have configuration parameter $localconfig specified in
[puppetd], which is a deprecated section. I'm assuming you meant
[agent]
warning: You have configuration parameter $classfile specified in
[puppetd], which is a deprecated section. I'm assuming you meant
[agent]
warning: You have configuration parameter $report specified in
[puppetd], which is a deprecated section. I'm assuming you meant
[agent]
info: Retrieving plugin
info: Loading facts in release_ver
info: Loading facts in lb_status_setup
info: Loading facts in release_ver
info: Loading facts in lb_status_setup
info: Caching catalog for hproxy11.h.foo.com
info: Applying configuration version '1315739532'
notice: 0.25.5 on hproxy11.h.foo.com
notice: /Stage[main]/Puppet::Setup/Notify[0.25.5 on
hproxy11.h.foo.com]/message: defined 'message' as '0.25.5 on
hproxy11.h.foo.com'
notice: Finished catalog run in 22.78 seconds
>
> Instead of using that service?
>
> Are you absolutely certain there isn't a stray ruby process running
> your old 0.25 puppet agent?
[root@hproxy11 ~]# ps -ef | grep ruby
root 17832 1 4 10:47 ? 00:00:16 /usr/bin/ruby /usr/sbin/puppetd
root 21791 10329 0 10:53 pts/0 00:00:00 grep ruby
root 30729 1 0 Sep02 ? 00:00:01 ruby
/usr/sbin/mcollectived --pid=/var/run/mcollectived.pid
--config=/etc/mcollective/server.cfg
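One hedged way to confirm what that lingering puppetd process (PID 17832 above) is actually executing is to resolve its binary through /proc. The helper below is only a sketch and assumes a Linux /proc layout; a stale 0.25 agent left running would show up here even after the RPM erase:

```shell
# Hypothetical helper: given a PID, print the executable behind it
# (assumes Linux's /proc filesystem; not part of Puppet itself)
exe_of() {
  readlink "/proc/$1/exe" 2>/dev/null
}

# For each process whose command line mentions puppetd, show its binary
for pid in $(pgrep -f puppetd); do
  echo "$pid -> $(exe_of "$pid")"
done
```

Pairing that with `rpm -qf` on the reported path would show which package (if any) owns the binary the service is actually running.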
>
>> Well, I can't seem to work out what's going on. A 'gem list --local'
>> shows only stomp and I can't find any other libraries or binaries
>> installed anywhere after removing the puppet and facter RPMs. :(
>
> Can you post your puppet.conf from your client and server?
From client:
# This file is managed by puppet
#
# Manual changes to this file will get overwritten
#
[main]
# The Puppet log directory.
# The default value is '$vardir/log'.
logdir = /var/log/puppet
# Where Puppet PID files are kept.
# The default value is '$vardir/run'.
rundir = /var/run/puppet
# Where SSL certificates are kept.
# The default value is '$confdir/ssl'.
ssldir = $vardir/ssl
pluginsync = true
factpath = $vardir/lib/facter
[puppetd]
# The file in which puppetd stores a list of the classes
# associated with the retrieved configuration. Can be loaded in
# the separate ``puppet`` executable using the ``--loadclasses``
# option.
# The default value is '$confdir/classes.txt'.
classfile = $vardir/classes.txt
# Where puppetd caches the local configuration. An
# extension indicating the cache format is added automatically.
# The default value is '$confdir/localconfig'.
localconfig = $vardir/localconfig
# Enable reports
report = true
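For what it's worth, the deprecation warnings at the top of the agent run come from the [puppetd] section name above; under 2.x the same settings belong in [agent]. A minimal sketch of the renamed section, carrying over the same values:

```ini
[agent]
    classfile = $vardir/classes.txt
    localconfig = $vardir/localconfig
    report = true
```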
From server:
[main]
# Where Puppet stores dynamic and growing data.
# The default value is '/var/puppet'
vardir = /var/lib/puppet
# The Puppet log directory.
# The default value is '$vardir/log'.
logdir = /var/log/puppet
# Where Puppet PID files are kept.
# The default value is '$vardir/run'.
rundir = /var/run/puppet
# Where SSL certificates are kept.
# The default value is '$confdir/ssl'.
ssldir = $vardir/ssl
external_nodes = /etc/puppet/bin/getnode.sh
node_terminus = exec
[production]
manifest = /etc/puppet/manifests/site.pp
modulepath = /etc/puppet/modules
[agent]
# The file in which puppetd stores a list of the classes
# associated with the retrieved configuration. Can be loaded in
# the separate ``puppet`` executable using the ``--loadclasses``
# option.
# The default value is '$confdir/classes.txt'.
classfile = $vardir/classes.txt
# Where puppetd caches the local configuration. An
# extension indicating the cache format is added automatically.
# The default value is '$confdir/localconfig'.
localconfig = $vardir/localconfig
[master]
# this is because of the shared vip between the 2 puppetmasters
certname=puppet
autosign=/etc/puppet/autosign.conf
# reporting
reportdir = /var/lib/puppet/reports/
reports = log,tagmail
syslogfacility = local7
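Since the server config sets node_terminus = exec, the /etc/puppet/bin/getnode.sh referenced above is an external node classifier: Puppet runs it with the node's certname as the first argument and expects a YAML document on stdout. A hypothetical sketch of such a script (the class names and parameters are made up for illustration, not the actual getnode.sh):

```shell
#!/bin/sh
# Hypothetical ENC sketch; Puppet invokes it as: getnode.sh <certname>
# and reads a YAML document of classes/parameters from stdout.
enc_for_node() {
  node="$1"
  cat <<EOF
---
classes:
  - base
parameters:
  client: $node
EOF
}

enc_for_node "${1:-hproxy11.h.foo.com}"
```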
>
> Are you able to replicate this problem on other nodes? So far we have
> only discussed a single node.
Yes, I am seeing this behaviour on a second node as well.
> The node we have been discussing was 'upgraded' from 0.25 to 2.7
> wasn't it? What process did you go through to do the upgrade?
Correct.
On Client:
service puppet stop
yum clean all
rpm --erase puppet
rpm --erase facter
rm -fR /var/lib/puppet
yum upgrade puppet
On server:
puppetca --clean hproxy11.h.foo.com
On Client:
service puppet start
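Given that sequence, a leftover gem or site_ruby copy of 0.25 could still shadow the 2.7 install, since an RPM erase only removes files the package owns. A hedged sketch of a scan for stray copies (the directory paths are assumptions for an EL-style layout):

```shell
# Hypothetical scan for leftover puppet files that rpm --erase would
# not remove (e.g. gem or manual installs); directories are assumptions.
scan_for_stale() {
  for dir in "$@"; do
    if [ -d "$dir" ]; then
      find "$dir" -maxdepth 4 -name 'puppet*'
    fi
  done
  return 0
}

scan_for_stale /usr/lib/ruby/site_ruby /usr/lib/ruby/gems /usr/local/lib
```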
>
> I presume this problem doesn't exist with a brand new node that has
> 2.7 and has not been upgraded?
I don't know. I don't have access to a fresh 2.7 install on the server
right now.
--
You received this message because you are subscribed to the Google Groups
"Puppet Users" group.