John Philips wrote:
But now I'm confused again...
You say that this is possible and may even be implemented soon:
package { 'foobar':
  ensure => installed,
  onlyif => 'test -f /foo/bar',
}
But something like this is pretty much out of the question?
$foo_exists = client_exec('test
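For comparison, the closest thing that works today is to push the client-side test into a custom fact and branch on it server-side. A sketch (the fact name $has_foo_bar is hypothetical and would have to be shipped as a small Facter plugin running the test on the client):

```puppet
# Hypothetical custom fact "has_foo_bar" (a Facter plugin that runs
# `test -f /foo/bar` on the client); the server branches on the value
# it receives with the catalog request.
if $has_foo_bar == "true" {
  package { 'foobar':
    ensure => installed,
  }
}
```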
John Philips wrote:
Peter, sorry if it appears that I'm using you as a target, but
you just put a big bullseye on yourself :-) If I understand
correctly, you suggest running puppet individually on every
single host and having the hosts query themselves, i.e. no
central puppetmaster? So,
Disconnect top-posted thus:
ISTR trying that and having the same problems.
User depends on class ldap-users. (So in theory, anything that needs a
user will require that.) Puppetd bails after grabbing the config with
"cannot find user joe" because LDAP isn't set up. (Same using tags.)
Users
Both scenarios involve a decision based on output from the client.
Based on output from the client, but in totally different ways. When the
client requests a configuration, it sends all known facts to the server.
The server then computes the list of resources (evaluating functions
like include)
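That fact-driven flow can be sketched in manifest form (a minimal illustration; $operatingsystem is a standard Facter fact, and the package names are just examples):

```puppet
# The client sends facts such as $operatingsystem with its request;
# the server compiles a different catalog depending on their values.
case $operatingsystem {
  'Debian', 'Ubuntu': {
    package { 'apache2': ensure => installed }
  }
  default: {
    package { 'httpd': ensure => installed }
  }
}
```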
declare -x LANG=en_US
But probably it has something to do with me being located in Slovenia:
sl_SI.
b.
On 28 okt., 17:01, Philip phil2...@gmail.com wrote:
Thank you for this detailed report. It seems that I have an error in
translating date and time into timestamps. What are your locale
Did exactly that. Got:
r...@server19:~# puppetd --test --tags=bootstrap
info: Loading fact dmidecode
...
info: mount[modules]: Mounted
info: mount[plugins]: Mounted
err: Could not create apt-da...@puppet: user apt-dater doesn't exist
warning: Not using cache on failed catalog
warning:
Hi
Today I tried to configure puppet for failover. I would like to have
two puppet masters, one active and the other not active. Then I would
migrate the IP address, and puppet would become active on the other
node. I would like to put the configuration on a NAS share.
Do you think this is
On Thu, Oct 29, 2009 at 11:49 AM, Rene rene.zbin...@gmail.com wrote:
Hi
Today I tried to configure puppet for failover. I would like to have
two puppet masters, one active and the other not active. Then I would
migrate the IP address, and puppet would become active on the other
node.
On Thu, Oct 29, 2009 at 3:27 PM, Nigel Kersten nig...@google.com wrote:
On Thu, Oct 29, 2009 at 11:49 AM, Rene rene.zbin...@gmail.com wrote:
Hi
Today I tried to configure puppet for failover. I would like to have
two puppet masters, one active and the other not active. Then I would
This will definitely work; I have this setup: two puppetmasters sharing a
VIP with heartbeat, both running nginx + mongrel. /etc/puppet is populated
through subversion (automatic checkout). /var/lib/puppet is NFS-mounted (a SPOF;
it could be an iSCSI disk with an ocfs2 filesystem). This works
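A minimal sketch of the nginx side of such a setup (the ports, certificate paths, and number of mongrel backends are assumptions, not taken from the poster's configuration):

```nginx
# nginx terminates SSL on the heartbeat VIP and balances across local
# mongrel puppetmaster processes; paths and ports are illustrative.
upstream puppetmaster {
    server 127.0.0.1:18140;
    server 127.0.0.1:18141;
}
server {
    listen 8140;
    ssl on;
    ssl_certificate     /var/lib/puppet/ssl/certs/puppet.pem;
    ssl_certificate_key /var/lib/puppet/ssl/private_keys/puppet.pem;
    location / {
        proxy_pass http://puppetmaster;
    }
}
```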
I'm trying to have an exec dependency on a service object that would keep
it from being restarted if the exec fails. Unfortunately, the service
gets refreshed regardless whenever the exec is run, failure or not.
I've tried various combinations of subscribe/require/notify, but can't
find an
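One arrangement worth trying (a sketch, not a confirmed fix; the Apache resource names and paths are stand-ins): make the service require the checking exec, so a failed check marks the exec as failed and the dependent service should be skipped for that run:

```puppet
# All names and paths here are hypothetical stand-ins.
file { '/etc/apache2/apache2.conf':
  source => 'puppet:///modules/apache/apache2.conf',
}
exec { 'configcheck':
  command   => '/usr/sbin/apache2ctl configtest',
  subscribe => File['/etc/apache2/apache2.conf'],
}
service { 'apache2':
  ensure    => running,
  require   => Exec['configcheck'],
  subscribe => File['/etc/apache2/apache2.conf'],
}
```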
Hi
I'm trying to have an exec dependency on a service object that would keep
it from being restarted if the exec fails. Unfortunately, the service
gets refreshed regardless whenever the exec is run, failure or not.
I've tried various combinations of subscribe/require/notify, but can't
Not to say "You're doing this all wrong!", but wouldn't this best be
handled in the init script? You can put it under Puppet control.
On Oct 29, 2009, at 3:39 PM, Jason Lavoie wrote:
I'm trying to have an exec dependency on a service object that would keep
it from being restarted if the
I need to do something like this
node foo {
  $iface = "pcn0"
  ...
}
# the template
<%= network_<%= iface %> %>/<%= netmask_<%= iface %> %>
The idea being that a file would be built on the client holding the
value of pcn0's network/netmask
Of course, I'm trying to avoid hard coding the interface name
We just have 3 puppetmasters and sync the configs from one master-master to
2 master-slaves.
It works very well. We have certs set up so a client can connect to any of
the masters.
--
Brian Akins
Has anyone used Augeas to manage the dhcpd.conf file? I really don't
want to have a series of .d directories to build this thing. Each
subnet needs to be a resource, and each static host entry inside the
subnet needs to be its own resource.
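A heavily hedged sketch of what that might look like, assuming an Augeas lens that parses dhcpd.conf is installed (the tree paths below depend entirely on the lens version and may differ):

```puppet
# Sketch only: the /files path and node names are assumptions about
# a dhcpd lens, not verified against a specific Augeas release.
augeas { 'dhcp-subnet-10-0-0-0':
  context => '/files/etc/dhcpd.conf',
  changes => [
    "set subnet[network = '10.0.0.0']/network 10.0.0.0",
    "set subnet[network = '10.0.0.0']/netmask 255.255.255.0",
  ],
}
```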
On 10/29, Carl Caum wrote:
Not to say "You're doing this all wrong!", but wouldn't this best be
handled in the init script? You can put it under Puppet control.
I _knew_ someone would say that. :) In fact, for this particular
example I gave, the Debian init scripts already do the config
So, is the client 0.25 and the server 0.24? I don't think that will work.
---
Thanks,
Allan Marcus
505-667-5666
On Oct 27, 2009, at 1:41 PM, Jason Antman wrote:
I tried updating one of my clients today from 0.24.8 to the Git head
from 2009-10-23. Unfortunately, I didn't know how imminent the
Allan Marcus wrote:
puppetd --version
will not return an 'rc' version. For example, it returns 0.25.1 for rc2.
It doesn't because of an issue we've had with Gems. Gems only
support versioning of x.y.z, which means we can't return a version
with
I've done a similar thing in my redhat network template, e.g.:
DEVICE=<%= device %>
BOOTPROTO=static
IPADDR=<%= eval("ipaddress_" + device) %>
NETMASK=<%= eval("netmask_" + device) %>
ONBOOT=yes
TYPE=Ethernet
USERCTL=no
HWADDR=<%= eval("macaddress_" + device) %>
GATEWAY=<%= gateway %>
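The eval trick can be tried outside Puppet with plain ERB; here local variables stand in for the facts Puppet exposes to templates (the names follow Facter's ipaddress_<iface> convention, but the values are made up):

```ruby
require 'erb'

# Stand-ins for facts; in a real template these come from Facter.
device         = 'eth0'
ipaddress_eth0 = '192.168.1.10'
netmask_eth0   = '255.255.255.0'

template = <<'EOT'
DEVICE=<%= device %>
IPADDR=<%= eval("ipaddress_" + device) %>
NETMASK=<%= eval("netmask_" + device) %>
EOT

# eval() builds the variable name from the device value and looks it
# up in the template's binding.
result = ERB.new(template).result(binding)
puts result
```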
Hopefully this helps,