[Puppet Users] Access to hiera repository
Hello everyone,

I am currently working in a Linux team that decided to use Puppet as a configuration management tool. We have developed a couple of our own modules, use a lot from the Forge, and keep our Hiera data in a separate Git repository (tools: r10k + control repo, one separate Hiera repo not managed by r10k, a GitLab server to manage all Git repos).

The IT department is quite big and has different silos (e.g. VMware team, Linux team, Backup team, Storage team, etc.), but we (meaning the Linux team) want to use Puppet to replace workflows that previously went through different departments. For example, to configure backup for a new machine, the backup team had to create a node in their backup tool and then give us the necessary input to generate the correct configuration file on the new server.

Ideally I would like them to manage the data in Hiera the same way we do, so they can leverage the hierarchy to define defaults at the subnet level, host level, etc. On the other hand, access to the single Hiera repo would allow them to basically reconfigure everything on a server (like adding data for the sudo module to add custom sudo rules). Even though this would be tracked through the Git logs, a lot of my colleagues are not comfortable with that (and it might even be against internal regulations).

So I am wondering: how do you manage the situation when a lot of different teams, with different levels of knowledge about Puppet, YAML, and Git, should contribute to Hiera, but should only manage the things they care about and are responsible for?

- Stefan

-- You received this message because you are subscribed to the Google Groups "Puppet Users" group. To unsubscribe from this group and stop receiving emails from it, send an email to puppet-users+unsubscr...@googlegroups.com. To view this discussion on the web visit https://groups.google.com/d/msgid/puppet-users/56B12FDC.8090801%40taunusstein.net. For more options, visit https://groups.google.com/d/optout.
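One pattern that addresses this (a sketch, not something proposed in the thread): split the Hiera data across per-team Git repositories, check each one out into its own subdirectory of the datadir, and let repository-level access control limit what each team can commit. With a Hiera 3 style hiera.yaml (matching the era of this thread), that could look like:

```yaml
# hiera.yaml sketch -- the team names, subdirectory paths, and hierarchy
# levels are illustrative assumptions, not taken from the original setup.
:backends:
  - yaml
:hierarchy:
  - "backup/nodes/%{::fqdn}"    # backup team's repo: host-level overrides
  - "backup/common"             # backup team's repo: team-wide defaults
  - "linux/nodes/%{::fqdn}"     # linux team's repo
  - "linux/common"
:yaml:
  :datadir: /etc/puppetlabs/code/hieradata
```

Note that this restricts who can commit where, but not which keys a level may set: the backup team could still define sudo data in its own files. Teams with this concern often add a CI check that rejects merge requests touching key prefixes outside the repo's allowed namespace.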
[Puppet Users] Re: Deploy bacula with puppet and foreman
As far as I can see, I have to call bacula::director::client somehow, as it defines the client config. But the question is: how can I do that?

On Tuesday, February 2, 2016 at 17:11:10 UTC+1, Timotheus Titus wrote:
>
> Hello,
>
> I'm using https://github.com/netmanagers/puppet-bacula to deploy bacula to a backup system and different clients.
>
> The installation of the bacula-director ("main server") and the bacula-storage ("HDD handler") is running fine, but if I add one node as server and one as client, I do not get an export for my config.
>
> For example, on the director node there are different directories and files like:
>
>     ├── bacula-dir.conf
>     ├── bacula-fd.conf
>     ├── bacula-sd.conf
>     ├── bconsole.conf
>     ├── clients.d
>     ├── director.d
>     └── storage.d
>
> Now for each client there should be a file called "clientxy.conf" in "clients.d". The files "bacula-sd.conf", "bconsole.conf" and "bacula-fd.conf" are generated fine - but the client config is not generated.
>
> I found a template for the clients in the module - it is located in "templates/director/client.conf.erb" - but I cannot find an attribute where I could insert this template.
>
> This is the YAML of my bacula director server:
>
>     bacula:
>       client_template: bacula/bacula-fd.conf (the title is not correct - it is the configuration file on the client itself, not the client template for the director)
>       console_template: bacula/bconsole.conf.erb
>       default_messages: Daemon
>       director_template: bacula/bacula-dir.conf.erb
>       manage_client: 'false'
>       manage_console: 'true'
>       manage_director: 'true'
>       manage_storage: 'true'
>       source_dir_purge: 'true'
>       storage_template: bacula/bacula-sd.conf.erb
>
> This is the YAML of my bacula client server:
>
>     bacula:
>       client_template: bacula/bacula-fd.conf
>       console_template: bacula/bconsole.conf.erb
>       default_messages: Daemon
>       director_template: bacula/bacula-dir.conf.erb
>       manage_client: 'true'
>       manage_console: 'false'
>       manage_director: 'false'
>       manage_storage: 'false'
>       source_dir_purge: 'false'
>       storage_template: bacula/bacula-sd.conf.erb
>
> Any ideas how to solve this and add a node automatically to bacula-dir?
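If the module really does expose a define named bacula::director::client (the name comes from this thread; the parameter names below are guesses), declaring a client on the director node might look roughly like the sketch below. Check the module's manifests/director/client.pp for the actual signature before using any of this.

```puppet
# Sketch only: parameter names and values are assumptions, not the
# module's documented API.
bacula::director::client { 'clientxy':
  address  => 'clientxy.example.com',   # hypothetical parameter
  password => 'changeme',               # hypothetical parameter
}
```

If the module instead builds clients.d/*.conf from exported resources, the client nodes need manage_client enabled and the director's own agent run must happen after the clients have exported their configuration.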
[Puppet Users] Deploy bacula with puppet and foreman
Hello,

I'm using https://github.com/netmanagers/puppet-bacula to deploy bacula to a backup system and different clients.

The installation of the bacula-director ("main server") and the bacula-storage ("HDD handler") is running fine, but if I add one node as server and one as client, I do not get an export for my config.

For example, on the director node there are different directories and files like:

    ├── bacula-dir.conf
    ├── bacula-fd.conf
    ├── bacula-sd.conf
    ├── bconsole.conf
    ├── clients.d
    ├── director.d
    └── storage.d

Now for each client there should be a file called "clientxy.conf" in "clients.d". The files "bacula-sd.conf", "bconsole.conf" and "bacula-fd.conf" are generated fine - but the client config is not generated.

I found a template for the clients in the module - it is located in "templates/director/client.conf.erb" - but I cannot find an attribute where I could insert this template.

This is the YAML of my bacula director server:

    bacula:
      client_template: bacula/bacula-fd.conf (the title is not correct - it is the configuration file on the client itself, not the client template for the director)
      console_template: bacula/bconsole.conf.erb
      default_messages: Daemon
      director_template: bacula/bacula-dir.conf.erb
      manage_client: 'false'
      manage_console: 'true'
      manage_director: 'true'
      manage_storage: 'true'
      source_dir_purge: 'true'
      storage_template: bacula/bacula-sd.conf.erb

This is the YAML of my bacula client server:

    bacula:
      client_template: bacula/bacula-fd.conf
      console_template: bacula/bconsole.conf.erb
      default_messages: Daemon
      director_template: bacula/bacula-dir.conf.erb
      manage_client: 'true'
      manage_console: 'false'
      manage_director: 'false'
      manage_storage: 'false'
      source_dir_purge: 'false'
      storage_template: bacula/bacula-sd.conf.erb

Any ideas how to solve this and add a node automatically to bacula-dir?
[Puppet Users] scheduling a Git repo sync on PE master
Hi everyone, quick question. I'm trying to update some files in one of my modules (modules/nginx/files) based on a file located in a remote repo. My nginx module is distributing an HTML file to all managed nodes, and I need to make sure this HTML file is the latest commit from the remote repo. I'm pulling this HTML file from the Git remote into my nginx/files/repo folder and, from there, serving the HTML file to my managed nodes.

I installed the vcsrepo module on my PE master, but I'm trying to decide on a good way to schedule a periodic pull from the remote repo into my nginx/files directory. I have a pull_repo.pp:

    vcsrepo { '/etc/puppetlabs/code/environments/production/modules/nginx/files/repo':
      ensure   => latest,
      provider => git,
      source   => 'https://github.com/puppetlabs/exercise-webpage.git',
      revision => 'master',
      force    => true,
    }

This works and pulls in the freshest HTML file each time. What's a good way of scheduling this to run on my PE master? Should I set up a regular cron job ('crontab -e') and have it do 'puppet apply pull_repo.pp', or is there a more recommended method to run something scheduled on the PE master?
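The cron approach can itself be managed by Puppet rather than edited by hand, which keeps the schedule in code. A sketch (the path to pull_repo.pp and the 15-minute interval are assumptions):

```puppet
# Run the vcsrepo manifest from the post periodically on the PE master.
cron { 'sync_nginx_files_repo':
  ensure  => present,
  command => '/opt/puppetlabs/bin/puppet apply /root/pull_repo.pp',
  user    => 'root',
  minute  => '*/15',
}
```

An alternative design: put the vcsrepo resource into the master's own node classification, so the master's regular agent run keeps the checkout at the latest revision without any extra scheduling machinery.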
[Puppet Users] Re: Error : Could not find default node or by name with 'xxxxx.domain.local, xxxxx.domain, xxxxx' on node xxxxx.domain.local
On Monday, February 1, 2016 at 10:27:58 AM UTC-6, Olivier Lemoine wrote:
> After that, I want to change the environment of my new node to "homologation", so I change it in "/etc/puppet/puppet.conf" on stestsles03.local.
>
> And when I run "puppet agent --test" again, I get this error:
>
>     stestsles03:~ # puppet agent --test
>     err: Could not retrieve catalog from remote server: Error 400 on SERVER: Could not find default node or by name with 'stestsles03.domain.local, stestsles03.domain, stestsles03' on node stestsles03.domain.local
>     warning: Not using cache on failed catalog
>     err: Could not retrieve catalog; skipping run

The master is complaining that it cannot match any node block to the node's identifier (which by default is its hostname, apparently "stestsles03.domain.local"). It tries the whole hostname, then each nonempty name it can construct by removing one name segment at a time, and finally it tries to fall back to a default node block; none of these is present in the new environment.

> I have tested this:
>
> - Cleaning the node on the puppet master ("puppet cert --clean stestsles03.domain.local"), deleting the ssl directory in "/var/lib/puppet", and running the agent to create a new ssl certificate (I cut/copied my node declaration (manifests) from the developpement environment to the homologation environment).

You should not need to manipulate the node's certificate to move it between environments, but if the new environment contains any node blocks at all, then it must be able to match the node to at least one of them. Perhaps the thing to do is to copy the appropriate node block into a manifest somewhere in the new environment's site manifest directory; but if you are intentionally avoiding default node blocks, then it is by no means clear that whatever node declaration matches your node in its original environment would still be appropriate for it in its new environment.
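Concretely, a sketch based on the node block from Olivier's original post (the path follows his /etc/puppet/environment layout):

```puppet
# /etc/puppet/environment/homologation/manifests/site.pp
# Copy the node block into the new environment so the master can match it:
node 'stestsles03.domain.local' {
  include repos
  include client_vtom
}

# Optional: a default block matches any node that no other block matches,
# which avoids the "Could not find default node" error for unlisted nodes.
node default {
}
```

Whether the default block is appropriate depends on policy; some sites deliberately omit it so that unclassified nodes fail loudly instead of getting an empty catalog.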
John
Re: [Puppet Users] How to run shell command on puppet agent
On Monday, February 1, 2016 at 3:16:16 PM UTC-6, Sans wrote:
>
> I agree with Steve's comment, but the point you guys are missing here is that it's a multi-tenant system, and I'm having a hard time visualizing how a custom fact will handle the situation, as the same command will return different results for different databases, depending on the client.

Well yes, it's easy to miss something that you never mentioned. Multi-tenancy is not an issue of any particular significance here, but if in the same run you want to evaluate that template multiple times with different values of the interpolated variables (i.e. in each of several instances of some defined type), or more generally, if it would be hard for the agent to know for itself which values to use, then that does bring in some additional considerations. You might still be able to implement it as a custom fact, maybe using a structured fact value such as a hash keyed on DB name, but that's probably a long shot.

The *other* way to run your own shell (or any other) code on the target node during a catalog run is via the provider of a resource that is being applied. You might think first of the Exec type, which is built around executing commands on the target node, but that type doesn't allow you to capture the command output for any purpose other than logging. To capture the output and do anything with it, you must either encapsulate the whole process in a script or other program and Exec that, or write a custom type and provider that implement your objective. In no event do approaches such as these, which work during catalog application, provide any data back to the master beyond what goes into the agent's report.

> It does make a difference in the sense that /etc/passwd is unique for a given puppet agent, but the output of, e.g., "ls ~/Documents | wc -l" is not when you run it for different users on the same puppet agent. How would you do that with a custom fact? I'm glad to be educated.
If each of several distinct users runs the agent under his own identity, then the agent will report a possibly different set of facts for each user. This is why multi-tenancy is not, in itself, a significant issue. Indeed, it would be best if each user were identified to the master via a distinct cert with, therefore, a distinct name; in that case, the master doesn't see it any differently than multiple separate machines. If instead the different tenants share a cert, then they can relatively easily steal configuration data from each other, which might include sensitive and/or confidential information.

It is no special problem if the agent provides different fact values on different runs, even in the case that the tenants share a cert. The fact-based approach presents a problem only if the agent does not know all the details of the command to run.

John
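The "hash keyed on DB name" idea can be sketched as a custom fact. Everything below is illustrative: the fact name, the database list, and the counting command are assumptions, not anything from the thread. The hash-building logic lives in a plain method so the Facter registration stays trivial:

```ruby
# Hypothetical structured fact: one value per database, keyed by DB name.
# The caller supplies the per-database computation as a block; in a real
# fact that block would shell out via Facter::Core::Execution.execute.
def per_db_values(db_names)
  db_names.each_with_object({}) do |db, result|
    result[db] = yield(db)
  end
end

# Register only when loaded by Facter (the guard keeps this file loadable
# as plain Ruby outside the agent).
if defined?(Facter)
  Facter.add(:tenant_doc_counts) do
    setcode do
      per_db_values(%w[tenant_a tenant_b]) do |db|
        Facter::Core::Execution.execute("ls /srv/#{db} | wc -l").to_i
      end
    end
  end
end
```

Because the agent computes this before requesting a catalog, the master can interpolate individual entries of the hash into templates. The limitation John describes still stands: the agent must already know which databases to enumerate.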
Re: [Puppet Users] New Resource Type for bareos
Hi Björn,

do you have your code on e.g. GitHub, so that I can take a look at it, or is it restricted?

I use Foreman for the backup definition: I have a global smart parameter setting file-fd to true, and a configuration group "backup-client". All hosts that belong to this configuration group get the client installed and activated. On the director host I add the clients I would like to back up. This results in a standard set of files. For upcoming versions I plan to make something like backup classes gold, silver, and bronze with different retention times, plus a scope type of thing for different filesets, but this is not yet implemented - not enough time. Maybe adding some functionality in the future like "there is a backup client installed, so a basic (bronze) set is scheduled in the backup" might be a cool idea, but I guess it's not that easy to implement with a focus on reusable modules and also with space management in mind.

Regards, Thomas

On 2016-01-27 at 17:06 GMT+01:00, Björn wrote:
> Hi Thomas,
>
> I have a hiera group of Linux boxes. All of these should have the bareos client and should back up a standard fileset, for instance /var/log.
> Okay, the backup client array in the master module would work, but if you forget to add a backup client, you get no backup. Now I'm dreaming of an implementation through PuppetDB, like the nagios resource types.
> Or a solution where all backup clients automatically register at the server. From my point of view the automatic configuration is preferable, because a missing backup can have the same impact as missing monitoring.
>
> Maybe there are better solutions to handle it without PuppetDB; I'm not sure.
>
> Regards,
> Björn
>
> On Tuesday, January 26, 2016 at 16:48:52 UTC+1, thbe wrote:
>>
>> Hi Björn,
>>
>> it depends on how you would like to implement the fully automatic configuration. I do this on the server side, because backup is nothing that applies out of the box after provisioning to the client and server. Under normal circumstances I would like to add a client to the backup only if really needed. Therefore I used an array in my module to specify the clients:
>>
>> https://github.com/thbe/puppet-bareos
>>
>> The module is still v0.1.0, so it's not yet feature complete and not released on the Forge, but it works the way I need it. I think I'll release it on the Forge sometime in Q1/2016 when the missing features are implemented.
>>
>> Regards, Thomas
>>
>> On January 21, 2016 at 15:23, Björn wrote:
>>
>> Hello,
>>
>> I am trying to make the bareos puppet module ready for PuppetDB and fully automatic configuration.
>>
>> If I understand correctly, I'll need a resource type to export, and to finally bring the client configuration onto the bareos server.
>>
>>     $ cat bareos/lib/puppet/type/bareos_client.rb
>>     Puppet::Type.newtype(:bareos_client) do
>>       desc 'TEST'
>>       ensurable
>>       newparam(:name, :isnamevar => true) do
>>         desc "The name of the client."
>>       end
>>     end
>>
>>     $ tail bareos/manifests/client.pp
>>         mode  => '0644',
>>         owner => 'bareos',
>>         group => 'bareos',
>>       }
>>
>>       @@bareos_client { $::hostname:
>>       }
>>
>>       Bareos_client <<| |>>
>>     }
>>
>> I get this error when I do a puppet run on the client:
>>
>>     Error: /Stage[main]/Bareos::Client/Bareos_client[PC3256CO]: Could not evaluate: No ability to determine if bareos_client exists
>>     /usr/lib/ruby/site_ruby/1.8/puppet/property/ensure.rb:85:in `retrieve'
>>     /usr/lib/ruby/site_ruby/1.8/puppet/type.rb:1048:in `retrieve'
>>     /usr/lib/ruby/site_ruby/1.8/puppet/type.rb:1076:in `retrieve_resource'
>>     /usr/lib/ruby/site_ruby/1.8/puppet/transaction/resource_harness.rb:236:in `from_resource'
>>     /usr/lib/ruby/site_ruby/1.8/puppet/transaction/resource_harness.rb:19:in `evaluate'
>>     /usr/lib/ruby/site_ruby/1.8/puppet/transaction.rb:204:in `apply'
>>     /usr/lib/ruby/site_ruby/1.8/puppet/transaction.rb:217:in `eval_resource'
>>     /usr/lib/ruby/site_ruby/1.8/puppet/transaction.rb:147:in `call'
>>     /usr/lib/ruby/site_ruby/1.8/puppet/transaction.rb:147:in `evaluate'
>>     /usr/lib/ruby/site_ruby/1.8/puppet/util.rb:335:in `thinmark'
>>     /usr/lib/ruby/1.8/benchmark.rb:308:in `realtime'
>>     /usr/lib/ruby/site_ruby/1.8/puppet/util.rb:334:in `thinmark'
>>     /usr/lib/ruby/site_ruby/1.8/puppet/transaction.rb:147:in `evaluate'
>>     /usr/lib/ruby/site_ruby/1.8/puppet/graph/relationship_graph.rb:118:in `traverse'
>>     /usr/lib/ruby/site_ruby/1.8/puppet/transaction.rb:138:in `evaluate'
>>     /usr/lib/ruby/site_ruby/1.8/puppet/resource/catalog.rb:169:in `apply'
>>     /usr/lib/ruby/site_ruby/1.8/puppet/util/log.rb:149:in `with_destination'
>>     /usr/lib/ruby/site_ruby/1.8/puppet/transaction/report.rb:112:in `as_logging_destination'
>>     /usr/lib/ruby/site_ruby/1.8/puppet/resource/catalog.rb:168:in `apply'
>>     /usr/lib/ruby/site_ruby/1.8/puppet/configurer.rb:120:in `apply_catalog'
>>     /usr/lib/ruby/site_ruby/1.8/puppet/util.rb:161:in `benchmark'
>>     /usr/lib/ruby/1.8/benchmark.rb:308:in `realtime'
>>     /usr/lib/ruby/site_ruby/1.8/puppet/uti
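The error in the trace ("No ability to determine if bareos_client exists") arises because the bareos_client type defines no provider, so the agent has no way to check or enforce the collected resources. One common way to sidestep writing a type/provider pair entirely is to export plain file resources instead; a sketch, with the director-side config paths being assumptions:

```puppet
# On every backup client: export a director-side config fragment.
@@file { "/etc/bareos/bareos-dir.d/client/${::hostname}.conf":
  ensure  => file,
  content => template('bareos/director-client.conf.erb'),  # hypothetical template
  tag     => 'bareos_client',
}

# On the bareos director: collect all exported fragments and reload the
# director so newly registered clients are picked up.
File <<| tag == 'bareos_client' |>> {
  notify => Service['bareos-dir'],
}
```

If a real bareos_client type is preferred, it additionally needs a provider implementing exists?, create, and destroy before the agent can evaluate ensurable instances of it.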
[Puppet Users] Re: Error : Could not find default node or by name with 'xxxxx.domain.local, xxxxx.domain, xxxxx' on node xxxxx.domain.local
You need to move/create/copy the manifest file to the correct environment folder when you move a node between environments, which it seems you haven't done.

Regards, Frederik

On Monday, February 1, 2016 at 17.27.58 UTC+1, Olivier Lemoine wrote:
>
> Hello,
>
> I have been using Puppet for some weeks in multi-environment mode (I have 4 environments: "developpement", "homologation", "production" and "dmz"), and I still have the same problem when I want to switch a node's environment.
>
> Example: I install and configure an agent for the first time.
>
> "/etc/puppet/puppet.conf" on "stestsles03":
>
>     [main]
>     logdir=/var/log/puppet
>     vardir=/var/lib/puppet
>     ssldir=/var/lib/puppet/ssl
>     rundir=/var/run/puppet
>     factpath=$vardir/lib/facter
>     server=mypuppetmaster.domain.local
>     environment=developpement
>
> I execute "puppet agent --test" on my node and go to the puppetmaster to sign the SSL certificate. I create my new node in the manifests file ("/etc/puppet/environment/developpement/manifests/site.pp"):
>
>     node 'stestsles03.domain.local' {
>       include repos
>       include client_vtom
>     }
>
> and I return to my new node to execute "puppet agent --test".
>
> Result: OK.
>
> After that, I want to change the environment of my new node to "homologation", so I change it in "/etc/puppet/puppet.conf" on stestsles03.local.
>
> And when I run "puppet agent --test" again, I get this error:
>
>     stestsles03:~ # puppet agent --test
>     err: Could not retrieve catalog from remote server: Error 400 on SERVER: Could not find default node or by name with 'stestsles03.domain.local, stestsles03.domain, stestsles03' on node stestsles03.domain.local
>     warning: Not using cache on failed catalog
>     err: Could not retrieve catalog; skipping run
>
> I have tested this:
>
> - Cleaning the node on the puppet master ("puppet cert --clean stestsles03.domain.local"), deleting the ssl directory in "/var/lib/puppet", and running the agent to create a new ssl certificate (I cut/copied my node declaration (manifests) from the developpement environment to the homologation environment).
>
> What is wrong?
>
> Sorry for my bad English :-)
>
> Best regards,
>
> Olivier
[Puppet Users] Re: Puppet file type - wrong selinux fcontext detected
Ok, thank you. It looks like a bug to me. I will try to reproduce it in the lab with the latest agent.

H. Karasek

On Monday, February 1, 2016 at 21:31:13 UTC+1, Thomas Müller wrote:
>
> I've seen this when the puppet agent service was already running at the time the fcontext was added with semanage. Afterwards, file resources applied the old contexts.
>
> This behaviour could be reproduced for all puppet runs started from the daemon. Puppet runs started from the shell with --test did apply the correct context.
>
> Restarting the puppet daemon fixed the problem.
>
> - Thomas