Excuse my newbieness, but I'm having a basic misunderstanding regarding
loops.
Say I have: $joesfriends = ['jack', 'sam', 'sally']
I need to add each entry to a file, one per line. My attempt:

$joesfriends.each |String $joesfriends| { # loop
  file { '/etc/list_of_joes_friends':
    line => "${joesfriends}",
  }
}
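Not an authoritative answer, but two patterns that come up a lot here: a file resource has no line attribute, and declaring the same file title on every loop iteration would fail with a duplicate declaration error anyway. A sketch of both common fixes (join is built into Puppet 5.5+; on older versions it comes from puppetlabs-stdlib, as does file_line) -- pick one approach, don't combine both for the same file:

```puppet
$joesfriends = ['jack', 'sam', 'sally']

# Option 1: render the whole list into a single resource by joining the
# array into one newline-separated string and managing the file's content.
file { '/etc/list_of_joes_friends':
  ensure  => file,
  content => join($joesfriends, "\n"),
}

# Option 2: if other code also writes to this file, manage individual
# lines with file_line from puppetlabs-stdlib, one resource per entry.
$joesfriends.each |String $friend| {
  file_line { "joes-friend-${friend}":
    path => '/etc/list_of_joes_friends',
    line => $friend,
  }
}
```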
Hello Puppet users,
I'm working on a webserver module that needs to ensure directories for
document roots. I tried doing this by using file resources like so:
file { 'vhost-A':
  ensure => 'directory',
  path   => '/var/www/sharedvhost',
  ...clipped for brevity...
}

file { 'vhost-B':
  ensure => 'directory',
  ...clipped for brevity...
}
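If both resources end up managing the same path (which the shared document root suggests), the agent will abort with a duplicate resource declaration error, since a given path can only be managed by one file resource in a catalog. A sketch of one common workaround, using ensure_resource from puppetlabs-stdlib; the owner and mode below are hypothetical, adjust for your setup:

```puppet
# ensure_resource() declares the resource only if an identical one is not
# already in the catalog, so multiple vhost definitions can each call this
# for the same shared document root without colliding.
ensure_resource('file', '/var/www/sharedvhost', {
  'ensure' => 'directory',
  'owner'  => 'www-data',  # hypothetical ownership; adjust as needed
  'mode'   => '0755',
})
```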
Hi Mike, glad to hear I'm not the only one with the headaches :)
We are planning on upgrading everything across the board: Puppet Server,
the Puppet agents, and PuppetDB. At least now I can get back to planning
that!
Our catalog/resource duplication is zero after about 5 hours of running.
I am in a similar boat with super low duplication rates. Check my post
earlier to see what someone suggested I try. Basically, do a bunch of runs
on the same agent, then pull the needed data from the API and compare the
files.
On Thursday, July 6, 2017 at 11:54:37 AM UTC-5, Peter Krawetzky wrote:
We had a similar issue, and Wyatt helped me out when huge os, mount, and
partition facts (I still owe you a beer or two) were destroying my DB. I
have dropped my DB several times to "clean" things up. We don't need the
historical data; we just use it with Puppetboard to see whether things are
happening or not.
I'm seeing a lot of "replace facts" commands in the PuppetDB server log. I
googled but can't find anything solid.
Is there a way to compare facts for a node between runs? Our agents run
hourly. We are using open source PuppetDB 3.0.2.
Thanks.
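One rough way to compare facts between runs, assuming open source PuppetDB 3.x serving the v4 query API on localhost:8080 (the certname and file names below are placeholders for your environment):

```shell
# Pull the node's current facts from the PuppetDB query API after each run
# and save a pretty-printed, key-sorted snapshot (sorting keeps the diff
# stable even if the API returns facts in a different order):
#
#   curl -s "http://localhost:8080/pdb/query/v4/nodes/agent01.example.com/facts" \
#     | python -c 'import json,sys; print(json.dumps(json.load(sys.stdin), indent=2, sort_keys=True))' \
#     > facts-run1.json
#
# With one snapshot per run, a plain diff shows exactly which facts changed.
# Demo with two small fabricated snapshots:
printf '{"kernel":"Linux","uptime_days":5}' > facts-run1.json
printf '{"kernel":"Linux","uptime_days":6}' > facts-run2.json
diff facts-run1.json facts-run2.json || true   # diff exits 1 when files differ
```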
Well, after several attempts at tuning the DB config and the puppetdb
config, we had to drop the PostgreSQL database and recreate it, then let
puppetdb create the required tables, indexes, etc. Now the command queue
hovers between zero and four and has processed tens of thousands of
commands.
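For anyone following along, the drop-and-recreate step looks roughly like this, assuming the database and its role are both named puppetdb (stop the puppetdb service first; this permanently destroys all stored reports, facts, and catalogs):

```shell
sudo service puppetdb stop
# Drop and recreate the empty database, owned by the puppetdb role.
sudo -u postgres dropdb puppetdb
sudo -u postgres createdb -E UTF8 -O puppetdb puppetdb
# On startup, PuppetDB runs its migrations against the empty database,
# creating the required tables and indexes itself.
sudo service puppetdb start
```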