Re: [Puppet Users] Monitor puppet runs on clients with nagios
On Thursday, 11 November 2010 at 06:09 -0800, Tim wrote:
> Hi,

Hello,

> Anyway what other approaches are there? I'd like to simply see 2 things:
> 1) If there were any failures during the puppet run on the client
> 2) When the last puppet run on each client was (i.e. if it was more than 50 mins ago, raise a warning)

I check point 2 with the help of mcollective and its puppetd agent. See http://www.rottenbytes.info/?p=387 for more information.

Regards,
Nico.
Re: [Puppet Users] Re: agent needs to make two runs before master compiles new catalog
Hi Kent,

Thanks for bringing this issue to our attention. I have reproduced the issue and filed the following ticket: http://projects.puppetlabs.com/issues/5318

Feel free to watch the ticket to track the progress of the issue.

-Dan

--
You received this message because you are subscribed to the Google Groups "Puppet Users" group. To post to this group, send email to puppet-us...@googlegroups.com. To unsubscribe from this group, send email to puppet-users+unsubscr...@googlegroups.com. For more options, visit this group at http://groups.google.com/group/puppet-users?hl=en.
Re: [Puppet Users] Same Service in Different Classes
Eric Sorenson writes:
> On Nov 15, 2010, at 9:53 PM, Daniel Pittman wrote:
>> Otherwise, we use code review before committing to the central repository to help reduce the risk of issues - now *two* operations people need to be ignorant of class C for the issue to pop up.
>
> Hi Daniel - it's a bit off-topic from the original question but would you mind explaining how your code review works in more detail? I'm curious about both the technology and people process involved.

Ah. Um, pretty light-weight: we all work within about ten feet of each other, so the process is simple and "social" rather than technical:

We use Subversion as our central repository. When someone is ready to commit, we grab another member of the team and ask them to review the code. If that brings up issues that need to be addressed, they get reworked and someone else (often the same reviewer) does another review.

We do have a 'testing-modules' path and use multiple environments so that we can have code in "testing" on live systems without having to do it all offline.

Anyway, once something is committed to that SVN repo it pushes on up to the central service and rolls out to production. No further review is needed. However, since that is the *only* way to get code into production[1], we have the absolute knowledge that every change made to the puppet code has a user id attached to it. So, if someone skips out on review and commits something, we know who it was and can fix the problem, whatever that is, that meant they didn't follow process.

> To me, supporting many authors is one of the most difficult problems in scaling puppet. It's really hard to strike a balance between, on one hand, safeguarding the stability of production puppet config and, on the other hand, enabling people to get work done without a big, slow, complicated process.

Yup. That works for us because we have a small, collocated team in the same time zone.
If I had to scale that to multiple sites, and especially multiple time zones, I would do it a bit differently: at the moment we use the honour system to manage code review; in a larger-scale system I would require that the commit message document the reviewer, and use a repository hook to verify that.[2] (If necessary I would also put a smaller team of folks between the production checkout and the place that anyone can check in, and use them to vet changes, but that adds a huge amount of cost.)

Once you have the social control of your team knowing, without a shadow of a doubt, that anything they commit to puppet will be identified back to them, most of the problems go away. People generally want to play nice, and knowing they will be caught if they cowboy things and it blows up stops most of the abuses. We still get the occasional problem, but they are pretty self-correcting.

Finally, if you really want to get scary-big I would reach for the tools that support good development practices: something like Gerrit, plus some sort of CI toolchain[3], so that changes get a real review and approval history.

Daniel

Footnotes:
[1] Technically someone could directly edit the files on the puppetmaster, but that usually breaks the next "proper" commit, and our audit logs show who accessed the system and all.
[2] Check it has a userid, check that userid is known to the system, check they are in the list of approved reviewers. (...and this can scale up, later, to allow reviewers to "own" part of the puppet codebase.)
[3] ...at least for syntax checking on Puppet manifests and ERB files.

--
✣ Daniel Pittman ✉ dan...@rimspace.net ☎ +61 401 155 707
♽ made with 100 percent post-consumer electrons
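A minimal sketch of the repository-hook idea from [2] (hypothetical, not from the thread; it assumes commit messages carry a `Reviewed-by:` trailer and that approved reviewers are listed one per line in a flat file):

```shell
#!/bin/sh
# Sketch of an SVN pre-commit check: reject commits whose log message
# lacks a Reviewed-by: trailer naming an approved reviewer.
# The check is factored into a function so it can be exercised on a
# plain string; a real hook would fetch the message with
#   msg=$(svnlook log -t "$TXN" "$REPOS")
# and exit non-zero to reject the commit.

check_reviewed_by() {
    msg="$1"
    reviewers_file="$2"
    # Pull the name out of the first "Reviewed-by:" line, if any.
    reviewer=$(printf '%s\n' "$msg" | sed -n 's/^Reviewed-by: *//p' | head -n1)
    if [ -z "$reviewer" ]; then
        echo "commit rejected: no Reviewed-by: trailer in log message" >&2
        return 1
    fi
    # The reviewer must appear verbatim in the approved list.
    if ! grep -qx "$reviewer" "$reviewers_file"; then
        echo "commit rejected: unknown reviewer: $reviewer" >&2
        return 1
    fi
    return 0
}
```

This keeps the honour system's workflow intact while making the "who reviewed this?" record machine-checkable, and the reviewers file can later be split per directory to let reviewers "own" parts of the codebase.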
Re: [Puppet Users] Same Service in Different Classes
On Nov 15, 2010, at 9:53 PM, Daniel Pittman wrote:
> Otherwise, we use code review before committing to the central repository to help reduce the risk of issues - now *two* operations people need to be ignorant of class C for the issue to pop up.

Hi Daniel - it's a bit off-topic from the original question, but would you mind explaining how your code review works in more detail? I'm curious about both the technology and the people process involved.

To me, supporting many authors is one of the most difficult problems in scaling puppet. It's really hard to strike a balance between, on one hand, safeguarding the stability of the production puppet config and, on the other hand, enabling people to get work done without a big, slow, complicated process.

- Eric Sorenson - N37 17.255 W121 55.738 - http://twitter.com/ahpook -
Re: [Puppet Users] Same Service in Different Classes
Yushu Yao writes:
>>> Just wondering is there a way to work around this?
>>
>> Define the service in a third class and include that third class in each of the first two.
>>
>> class a { include c }
>> class b { include c }
>> class c { service { "foobar": } }
>>
>> Classes can be included multiple times without trying to create duplicates of the resources they contain.
>
> What if class a and class b are in two different modules that are developed by different developers? They will not necessarily know where class c is.

We use the puppetdoc tools to help with that communication:
http://projects.puppetlabs.com/projects/1/wiki/Puppet_Manifest_Documentation

That makes it easier for your two developers to see the existence of class c and all. (Good naming makes it easier, of course. :)

Otherwise, we use code review before committing to the central repository to help reduce the risk of issues - now *two* operations people need to be ignorant of class C for the issue to pop up.

If that also fails, we eventually live with the risk: when the manifest deploys with the duplicate service definition we get a failure report because of it. Then we identify which manifest (A or B) is wrong, fix it to use class C, and go on from there. While that failure is in place we can't make operational changes to the machine using the combination of bad classes, but that isn't too much of a problem. Historically we would have been worse off, because B would have broken A without any warning. ;)

Daniel

--
✣ Daniel Pittman ✉ dan...@rimspace.net ☎ +61 401 155 707
♽ made with 100 percent post-consumer electrons
Re: [Puppet Users] Same Service in Different Classes
On Mon, Nov 15, 2010 at 9:16 PM, Yushu Yao wrote:
>> Why do both classes need to have the same service defined?
>
> Simple use case:
>
> Want to define two apache based web services on the same server. (e.g. passenger + turbogears)
>
> I think module-based definitions are the key concept behind puppet (no?), so we use one module for turbogears, another module for passenger.
>
> They both need to have control over the Service["httpd"]
>
> What is the best way to implement this?
>
> -Yushu

You should have an apache/httpd module for the webserver itself. It will contain the Service definition. Then your modules for turbogears and passenger should only need to notify Service["httpd"], not actually define the service again.

--
Nigel Kersten - Puppet Labs - http://www.puppetlabs.com
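A sketch of that layout in Puppet (module layout, file names, and config paths are illustrative, not from the thread):

```puppet
# modules/apache/manifests/init.pp -- this module owns the service
class apache {
  package { 'httpd':
    ensure => installed,
  }
  service { 'httpd':
    ensure  => running,
    enable  => true,
    require => Package['httpd'],
  }
}

# modules/passenger/manifests/init.pp -- drops config, notifies the service
class passenger {
  include apache
  file { '/etc/httpd/conf.d/passenger.conf':
    source => 'puppet:///modules/passenger/passenger.conf',
    notify => Service['httpd'],
  }
}

# modules/turbogears/manifests/init.pp -- likewise, no duplicate service
class turbogears {
  include apache
  file { '/etc/httpd/conf.d/turbogears.conf':
    source => 'puppet:///modules/turbogears/turbogears.conf',
    notify => Service['httpd'],
  }
}
```

Including both passenger and turbogears on one node is then safe: class apache, and the single Service["httpd"] it contains, is included twice but evaluated only once, and either module's config change restarts the shared service.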
Re: [Puppet Users] Same Service in Different Classes
>> Just wondering is there a way to work around this?
>
> Define the service in a third class and include that third class in each of the first two.
>
> class a {
>   include c
> }
> class b {
>   include c
> }
> class c {
>   service { "foobar": }
> }
>
> Classes can be included multiple times without trying to create duplicates of the resources they contain.

What if class a and class b are in two different modules that are developed by different developers? They will not necessarily know where class c is.

Thanks
-Yushu

> Richard
Re: [Puppet Users] Same Service in Different Classes
> Why do both classes need to have the same service defined?

Simple use case:

Want to define two apache based web services on the same server. (e.g. passenger + turbogears)

I think module-based definitions are the key concept behind puppet (no?), so we use one module for turbogears, another module for passenger.

They both need to have control over the Service["httpd"].

What is the best way to implement this?

-Yushu
Re: [Puppet Users] bug with using exported resources?
You have to use:

@@sshkey { $fqdn:
  type         => rsa,
  key          => $sshrsakey,
  host_aliases => [ $hostname, $ipaddress ],
}

The following happened: Puppet joins the resource name and host_aliases with a "," to put the entry in the file. Because you put everything in the resource name, you ended up with "$fqdn,$hostname,$ipaddress", and Puppet wrote an entry "$fqdn,$hostname,$ipaddress $type $key" to your known hosts. On the second run it reads the lines again and now does a split(",") on the first field: the first item ($fqdn) is interpreted as the resource name, and all the other items ($hostname, $ipaddress) are interpreted as host_aliases. Puppet then recognises that there is no resource called "$fqdn,$hostname,$ipaddress" present in the file and creates it again.

You should file a bug about the sshkey type not raising an error if you define a resource name with "," in it.

-Stefan

On Mon, Nov 15, 2010 at 06:02:59AM -0800, Christopher McCrory wrote:
> Hello...
>
> Is this a bug or by design?
>
> I'm using exported resources to generate /etc/ssh/ssh_known_hosts. I changed the example from the docs to this:
>
> @@sshkey { "$fqdn,$hostname,$ipaddress":
>   type => rsa,
>   key  => $sshrsakey,
> }
>
> so that I would get one line per host in the ssh_known_hosts file. What happened was that on each run several (all?) exported keys would be re-added. At one point I counted 34 duplicate entries. I changed the module to:
>
> @@sshkey { "$fqdn":
>   type => rsa,
>   key  => $sshrsakey,
> }
> @@sshkey { "$hostname":
>   type => rsa,
>   key  => $sshrsakey,
> }
> @@sshkey { "$ipaddress":
>   type => rsa,
>   key  => $sshrsakey,
> }
>
> And now I get three entries for each host and no duplicates. Is this a bug?
>
> Using puppet 0.25.4 on Ubuntu 10.04 on the client and puppet 0.25.5 from epel on CentOS. All 32-bit servers.
>
> --
> Christopher McCrory
> To the optimist, the glass is half full.
> To the pessimist, the glass is half empty.
> To the engineer, the glass is twice as big as it needs to be.
[Puppet Users] Re: puppet +with build support
On Nov 15, 11:11 am, "sanjiv.singh" wrote:
> 1) Is there any mechanism by which we can select puppet modules according to build number?

Look at the support for modulepath with multiple environments[1]. You can set the "environment" value to any string, so you can use revision numbers or tags [1002, 1003, X, Y] instead of [production, testing, development] for $environment. This way clients can be tied to a 'tag' of modules, while defaulting to the 'main' path for unknown or unset environments.

> 2) Is there any mechanism by which we can make puppet modules/classes take arguments, so that they work according to build number?

Parameterized classes[2] may work for you, depending on what you need. You could set the $build_version from a custom Facter fact, from an External Node Classifier[3], or from LDAP Nodes.

> 3) Does puppet have inbuilt support for versioning?

Basically it relies on your puppet master manifest & module content coming from the VCS of your choice. For example, /etc/puppet/modules would be a working copy of svn://puppet/branches/production/puppet/modules/. You can also track the catalog "version" based on the output of a script[4], so you could provide a version based on `svn info /etc/puppet/`, for example. Be aware that config_version is built on tracking changes to manifest files: it may miss changes in resources collected from storeconfigs, File resources, template content, etc.

> specifically, i am going through a critical time, where i need to configure one node with build number X for one development team and a second node with build number Y for a second testing team.

I think this sounds like setting $environment on a per-host basis using External Nodes.

[1] http://projects.puppetlabs.com/projects/1/wiki/Using_Multiple_Environments
[2] http://docs.puppetlabs.com/guides/language_tutorial.html#parameterised-classes
[3] http://docs.puppetlabs.com/guides/external_nodes.html
[4] http://docs.puppetlabs.com/references/latest/configuration.html#configversion
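A sketch of the per-build environment setup in puppet.conf on the master (environment names and paths are illustrative; this uses the section-per-environment syntax of 0.25/2.6-era Puppet):

```
[main]
    # fallback for clients with no explicit environment
    modulepath = /etc/puppet/environments/main/modules
    manifest   = /etc/puppet/environments/main/manifests/site.pp

[build_1002]
    modulepath = /etc/puppet/environments/build_1002/modules
    manifest   = /etc/puppet/environments/build_1002/manifests/site.pp

[build_1003]
    modulepath = /etc/puppet/environments/build_1003/modules
    manifest   = /etc/puppet/environments/build_1003/manifests/site.pp
```

A client would then select a build with `puppetd --environment build_1002`, or an External Node Classifier would return `environment: build_1002` for that node, so each team's machines track their own build's modules.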
Re: [Puppet Users] Same Service in Different Classes
On Mon, Nov 15, 2010 at 3:08 PM, Yushu Yao wrote:
> Hi Experts,
>
> I define two classes, each of them has the same service defined in it. If I include both classes for a node it will fail complaining "Duplicated definition: Service[xxx]".
>
> Just wondering is there a way to work around this?
>
> -Yushu

Why do both classes need to have the same service defined?

--
Nigel Kersten - Puppet Labs - http://www.puppetlabs.com
[Puppet Users] Re: proper way to purge DB data for retired hosts
On Nov 15, 7:09 am, Christopher McCrory wrote:
> Hello...
>
> I've been testing some new servers. I'm using exported resources for several configs (see other email on ssh_known_hosts), including the nagios types (very cool!). Now I need to retire several test servers. How do I 'properly' purge the exported data for these test servers from the mysql DB on the puppetmaster?
>
> /me not a SQL guru...

Check out puppetstoredconfigclean.rb[1]. That will purge the complete record of each host from the storeconfig DB.

[1] https://github.com/puppetlabs/puppet/blob/master/ext/puppetstoredconfigclean.rb
Re: [Puppet Users] Same Service in Different Classes
> I define two classes, each of them has the same service defined in it, if I include both classes for a node it will fail complaining "Duplicated definition: Service[xxx]".
>
> Just wondering is there a way to work around this?

Define the service in a third class and include that third class in each of the first two.

class a {
  include c
}
class b {
  include c
}
class c {
  service { "foobar": }
}

Classes can be included multiple times without trying to create duplicates of the resources they contain.

Richard
[Puppet Users] Same Service in Different Classes
Hi Experts,

I define two classes, each of which has the same service defined in it. If I include both classes for a node it will fail, complaining "Duplicated definition: Service[xxx]".

Just wondering, is there a way to work around this?

Thank you!

-Yushu
[Puppet Users] Re: File type failing how to avoid this?
Thank you!!! That pretty much answers my question. Thanks again.

On Nov 15, 6:56 am, jcbollinger wrote:
> On Nov 12, 12:28 pm, Roberto Bouza wrote:
>
>> Hello,
>>
>> Up to now everything is working great with puppet. I just have one question.
>>
>> Is there a way to tell a type (like file) not to fail if something specific happens?
>
> At that level of generality, why would you consider it anything other than a failure when Puppet cannot put the system into the state you asked it to achieve? Puppet will apply as many resources as it can despite any failures, but it doesn't have a sense of optional state components.
>
> If you have not already done so, you may find it useful to read the documentation on the specific resource types you are trying to employ: http://docs.puppetlabs.com/#resource-types
>
>> Let's say I have a directory which needs to be created if it's not there, and then I mount a file system "ro" on top of that.
>
> Puppet is good at that sort of thing.
>
>> The first time it'll work, but the second time it will fail with an error saying the directory is "ro", and it will fail on recursion.
>
> What does recursion have to do with it? Anyway, it sounds like you may have an error in your manifests.
>
>> There has to be a way to tell puppet that when it is "ro" to just check that the file is there and not create it (if you are looking for a file inside a "ro" directory).
>
> Puppet will not modify a file (or directory) it is managing if that file already has the characteristics you told Puppet it should have. You don't have to do anything special to get that behavior. Furthermore, you can specify (replace => "no") that Puppet should not modify the content of a managed file if it already exists; that's not relevant for directories or symlinks because they don't have content as such.
>
>> I don't know if it's clear what I'm trying to achieve.
>
> No, it's not clear.
> I will take a stab at giving you something useful, but in the (likely) event that I miss, do please post example manifests that demonstrate your problem.
>
> First, to ensure the presence of a directory named "/ro", owned by root:root and writable only by root:
>
> file { "/ro":
>   ensure => "directory",
>   owner  => "root",
>   group  => "root",
>   mode   => "0755",
>   # The following should be the default, but since
>   # you mentioned a recursion problem:
>   recurse => false,
> }
>
> Puppet will attempt at every run to ensure that the specified directory exists and has the specified ownership and mode. If you have a file system mounted on it, then that file system may present its view of the owner and mode of the file system root, and that's what Puppet will work with.
>
> Next, to ensure that a file system mount is defined (e.g. in /etc/fstab) and that the corresponding file system is, in fact, mounted:
>
> # (This version is for an NFS file system.
> # Adjust as necessary for other FS types.)
> mount { "/ro":
>   # example:
>   device  => "server.my.com:/exports/ro_remote",
>   fstype  => "nfs",
>   # I infer from the name that you want a read-only mount:
>   options => "ro",
>   ensure  => "mounted",
>   # Puppet should assume this automatically, but it doesn't
>   # hurt to be explicit, especially when debugging:
>   require => File["/ro"],
> }
>
> There are more Mount properties you may want to tweak for your particular situation.
>
> Cheers,
>
> John
Re: [Puppet Users] Exported resources, stale checksums in state.yaml, and eternally growing filebuckets
Nick Moffitt:
> # md5sum /home/foo/.ssh/authorized_keys; puppetd --environment=staging -t | grep 'checksum changed'; md5sum /home/foo/.ssh/authorized_keys
> fc9e4d3f84f99cff14a16dbe20f0db70  /home/foo/.ssh/authorized_keys
> notice: /Stage[main]//Node[central.example.com]/File[/home/foo/.ssh/authorized_keys]/checksum: checksum changed '{md5}7c2a499471221f2511afde8e2ca3c329' to '{md5}fc9e4d3f84f99cff14a16dbe20f0db70'
> 8492d19fb29b15d52c916a8d60c4b55c  /home/foo/.ssh/authorized_keys

Well, it would appear that this may be a bug: http://projects.puppetlabs.com/issues/5301

--
"As I soared high into the tag cloud Xeni Jardin carefully put up for me, I couldn't help but wonder how high we were above the blogosphere." -- Carlos Laviola
[Puppet Users] puppet +with build support
hi all,

I am working in a development environment (configured with puppet + LDAP). Each day the build number changes from X to Y (there may be changes in templates and deployable files - additions/deletions - or in configuration parameters).

Once I have configured puppet modules for build number X, I am able to configure nodes according to build number X. Then I suddenly need to set up puppet modules for build number Y, so that nodes can be configured for build number Y.

What I need is to set up puppet in such a way that it supports build numbers, meaning it would be able to configure nodes according to whatever build number we want. So far I have set up puppet modules for one specific build.

1) Is there any mechanism by which we can select puppet modules according to build number?

Another way:

2) Is there any mechanism by which we can make puppet modules/classes take arguments, so that they work according to build number?

Another way:

3) Does puppet have inbuilt support for versioning?

Specifically, I am going through a critical time, where I need to configure one node with build number X for one development team, and a second node with build number Y for a second testing team.

(Hopefully I am able to explain what I am trying to say.)

Thanks & Regards
Sanjiv Singh (iLabs)
Impetus Infotech (India).
[Puppet Users] "Could not retrieve catalog from remote server" messages
Ever since I upgraded from 2.6.0 to 2.6.2 I've been randomly getting the following reports from my clients:

Sun Nov 14 16:13:35 -0600 2010 Puppet (err): Could not retrieve catalog from remote server: end of file reached
Sun Nov 14 16:13:35 -0600 2010 Puppet (notice): Using cached catalog
Sun Nov 14 16:13:35 -0600 2010 Puppet (err): Could not retrieve catalog; skipping run

When I run puppetd -v -t I get the following:

Mon Nov 15 11:22:11 -0600 2010 Puppet (info): Caching catalog for [machine].[domain]
Mon Nov 15 11:22:11 -0600 2010 Puppet (info): Applying configuration version '1289728925'

Is this normal behavior? I can post relevant log and conf files upon request.
Re: [Puppet Users] Exported resources, stale checksums in state.yaml, and eternally growing filebuckets
Nick Moffitt:
> Further, grepping for a chunk of the options in this resource in the clientbucket finds hundreds of entries, and it would appear that all possible orderings are coming from the puppetmaster. I realize that technically there is a finite limit to the number of permutations, but this strikes me as wasteful.

In fact, the waste came from the fact that the header for any provider descended from the parsedfile provider includes a timestamp. This will grow forever, unfortunately, even as the practical elements of the file do not change one bit.

--
How do you get mailings?... from the lists 1. suspects 2. elbows -- Don Saklad
Re: [Puppet Users] Re: agent needs to make two runs before master compiles new catalog
Can you ping me on #puppet at freenode? bodepd.

-Dan

On Mon, Nov 15, 2010 at 9:16 AM, Kent wrote:
> Already tried filetimeout=0 with no success. :(
>
> The description for the setting ignorecache in the puppet.conf man page:
>
> "Ignore cache and always recompile the configuration. This is useful for
> testing new configurations, where the local cache may in fact be stale
> even if the timestamps are up to date - if the facts change or if the
> server changes."
>
> This sounds like a server setting to me since it mentions compilation.
> In any case, the puppetmaster's log reports that on every run it expires
> the cached catalog for the host making the run, and that it recompiles
> for the host.
[Puppet Users] Re: agent needs to make two runs before master compiles new catalog
Already tried filetimeout=0 with no success.

The description for the setting ignorecache in the puppet.conf man page:

"Ignore cache and always recompile the configuration. This is useful for testing new configurations, where the local cache may in fact be stale even if the timestamps are up to date - if the facts change or if the server changes."

This sounds like a server setting to me since it mentions compilation. In any case, the puppetmaster's log reports that on every run it expires the cached catalog for the host making the run, and that it recompiles for the host.

On Nov 15, 8:46 am, Dan Bode wrote:
> try setting filetimeout=0 on the puppet master
>
> for more information you can: man puppet.conf
[Puppet Users] Exported resources, stale checksums in state.yaml, and eternally growing filebuckets
I have found what I believe to be incorrect checksums in state.yaml, and somewhat wasteful thrashing in the contents of exported ssh_authorized_key resources (and possibly others).

My ultimate goal is to create a "stop the line" sort of system: if someone has manually edited a puppet-managed file, the next catalog collection will grind to a halt and alerting systems will send out notifications. To this end, I have done the following:

* I have a custom fact that parses state.yaml into a format suitable for being fed into md5sum -c, and returns true if any of the checksums fail.
* I have a module that calls fail() if the custom fact is true.

This system actually works rather well, I find! My problem is that I have an exported resource to allow ssh triggering of commands on a central machine from a set of other machines:

@@ssh_authorized_key { "u...@$hostname":
    key     => $user_rsa_key,
    type    => 'ssh-rsa',
    user    => 'foo',
    options => "command=\"...\",no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty,from=\"$ipaddress\"",
}

And then:

node 'central.example.com' {
    # Create the authkeys file automatically
    Ssh_authorized_key <<| user == "foo" |>>
}

The entry for /home/foo/.ssh/authorized_keys in state.yaml causes my md5sum system to fail every time. Upon inspection, I note that the entry in state.yaml is exactly one revision out of date!
# md5sum /home/foo/.ssh/authorized_keys; puppetd --environment=staging -t | grep 'checksum changed'; md5sum /home/foo/.ssh/authorized_keys
fc9e4d3f84f99cff14a16dbe20f0db70  /home/foo/.ssh/authorized_keys
notice: /Stage[main]//Node[central.example.com]/File[/home/foo/.ssh/authorized_keys]/checksum: checksum changed '{md5}7c2a499471221f2511afde8e2ca3c329' to '{md5}fc9e4d3f84f99cff14a16dbe20f0db70'
8492d19fb29b15d52c916a8d60c4b55c  /home/foo/.ssh/authorized_keys

And then in state.yaml:

File[/home/foo/.ssh/authorized_keys]:
  :checked: 2010-11-15 12:52:54.896678 +00:00
  :checksums:
    :md5: "{md5}fc9e4d3f84f99cff14a16dbe20f0db70"
  :synced: 2010-11-15 12:52:54.899011 +00:00

Shouldn't the system have noticed a change from "{md5}fc9e4d3f84f99cff14a16dbe20f0db70" to "{md5}8492d19fb29b15d52c916a8d60c4b55c" there?

Further, grepping for a chunk of the options in this resource in the clientbucket finds hundreds of entries, and it would appear that all possible orderings are coming from the puppetmaster. I realize that technically there is a finite limit to the number of permutations, but this strikes me as wasteful.

So partly I'm trying to understand how this works, but I would like to know two things:

1. Is there someplace with a "blessed" copy of the *current* checksum for this file?
2. Is there any way I can lock this exported resource to a specific ordering, or otherwise prevent it from updating when there has been no change in the component records?

My puppetmaster is running 2.6.1-0ubuntu2 and central.example.com is running 0.25.4-2ubuntu6 (as are most of the other puppet clients, the remainder running the same as the master).

--
"These people program the way Victorians dress. It takes two hours and three assistants to put on your clothes, and you have to change before dinner. But everything is modular." -- Miles Nordin, on PAM
Re: [Puppet Users] Re: agent needs to make two runs before master compiles new catalog
Hi Kent, On Mon, Nov 15, 2010 at 8:05 AM, Kent wrote: > Nigel, > > It is number-of-runs based. If I execute two runs in rapid succession > 2 seconds after changing a manifest on the puppetmaster, the new > config *will* be pushed on the second run. On the other hand I can > walk away for 10 minutes and when I then execute the runs, the new > config will still not take effect until the second run. > > Is it likely this has something to do with catalog caching on the > master? I tried turning caching off by setting 'ignorecache = true' in > puppet.conf this is a client side configuration, it probably won't help. but this didn't help, so maybe this isn't the issue here. > try setting filetimeout=0 on the puppet master for more information you can: man puppet.conf > -Kent > > On Nov 14, 8:18 am, Nigel Kersten wrote: > > On Wed, Nov 10, 2010 at 1:08 AM, luke.bigum > wrote: > > > I've seen the same issue as well. I just tested then, adding a simple > > > notify resource to a node and it took three consecutive runs of > > > puppetd before the message appeared: > > > > Is it the number of runs or is it simply time based? 
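For reference, the setting Dan suggests would look something like this in the master's puppet.conf (the section placement is an assumption; filetimeout controls how long the master caches files such as manifests before re-checking them for changes):

```ini
# /etc/puppet/puppet.conf on the puppetmaster - sketch
[master]
    # Re-check manifests and other served files on every request
    # instead of caching them for the default 15 seconds.
    filetimeout = 0
```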
[Puppet Users] Re: agent needs to make two runs before master compiles new catalog
Nigel,

It is number-of-runs based. If I execute two runs in rapid succession 2 seconds after changing a manifest on the puppetmaster, the new config *will* be pushed on the second run. On the other hand I can walk away for 10 minutes and when I then execute the runs, the new config will still not take effect until the second run.

Is it likely this has something to do with catalog caching on the master? I tried turning caching off by setting 'ignorecache = true' in puppet.conf, but this didn't help, so maybe this isn't the issue here.

-Kent

On Nov 14, 8:18 am, Nigel Kersten wrote:
> On Wed, Nov 10, 2010 at 1:08 AM, luke.bigum wrote:
> > I've seen the same issue as well. I just tested then, adding a simple
> > notify resource to a node and it took three consecutive runs of
> > puppetd before the message appeared:
>
> Is it the number of runs or is it simply time based?
>
> > # puppetd --test
> > info: Retrieving plugin
> > info: Caching catalog for puppet-master-01
> > info: Applying configuration version '1289376693'
> > notice: Finished catalog run in 30.24 seconds
> > # puppetd --test
> > info: Retrieving plugin
> > info: Caching catalog for puppet-master-01
> > info: Applying configuration version '1289377768'
> > notice: Finished catalog run in 24.98 seconds
> > # puppetd --test
> > info: Retrieving plugin
> > info: Caching catalog for puppet-master-01
> > info: Applying configuration version '1289379786'
> > notice: foo
> > notice: /Stage[main]//Node[puppet-master-01]/Notify[test]/message: defined 'message' as 'foo'
> > notice: Finished catalog run in 26.46 seconds
>
> > # /opt/ruby-enterprise/bin/gem list
> >
> > *** LOCAL GEMS ***
> >
> > facter (1.5.8)
> > fastthread (1.0.7)
> > mysql (2.8.1)
> > passenger (2.2.9)
> > puppet (2.6.2)
> > rack (1.1.0)
> > rake (0.8.7)
>
> > On Nov 9, 9:08 pm, Jeremy Carroll wrote:
> >> I am having the same issue, and am running about the same stack.
> > >> CentOS 5.5 > > >> facter (1.5.8) > >> fastthread (1.0.7) > >> passenger (2.2.15) > >> puppet (2.6.2) > >> puppet-module (0.3.0) > >> rack (1.1.0) > >> rake (0.8.7) > >> stomp (1.1.6) > > >> On Tue, Nov 9, 2010 at 2:50 PM, Kent wrote: > >> > Patrick, thanks for the speedy reply once again. > > >> > I'm using RHEL5 and Puppet 2.6.1, Passenger 2.2.7, Rack 1.1.0. > > >> > From what I've read in this group and in Puppet Labs docs/wikis, > >> > Debian/Ubuntu users do seem to have an easier time generally than > >> > CentOS/Red Hat :-\ > > >> > Can I pass my command-line options to Puppetmasterd in the config.ru > >> > file? > > >> > -Kent > > >> > On Nov 9, 10:53 am, Patrick wrote: > >> > > On Nov 9, 2010, at 9:34 AM, Kent wrote: > > >> > > > On Nov 8, 11:07 am, Patrick wrote: > >> > > >> On Nov 8, 2010, at 9:10 AM, Kent wrote: > > >> > > >>> Hi all, > > >> > > >>> I'm a new puppet user and new to the forum. > > >> > > >>> I just switched my Puppetmaster to running inside Apache (via > >> > > >>> Passenger). When I make a change to a resource on the master, it > >> > > >>> sometimes takes a given node TWO runs before the master will > >> > > >>> realize > >> > > >>> the resource has changed and recompile a new catalog version for > >> > > >>> the > >> > > >>> node. For example, say my puppetmaster is serving configuration > >> > > >>> version '123' to a node. I change the file permissions for a file > >> > > >>> resource that's part of that catalog and then do a puppet run on > >> > > >>> the > >> > > >>> node. If I'm running with Passenger, the master serves config > >> > > >>> version > >> > > >>> '123' one more time (the agent makes no changes). The next time I > >> > > >>> run > >> > > >>> the node's agent, the master compiles new catalog version '456' and > >> > > >>> the agent makes the permission change. > > >> > > >>> A few items of note: > > >> > > >>> 1. This is not a problem with all changes to puppet module > >> > > >>> content. 
> >> > > >>> For example, if I change the source contents of a file in the > >> > > >>> 'files' > >> > > >>> directory of a module, the master will notice this immediately and > >> > the > >> > > >>> puppet agent on the node will grab the new file on the first run > >> > > >>> following the change on the master. > > >> > > >> Fact: > >> > > >> Files sent using "source" aren't part of the catalog. Instead, the > >> > client asks the server for them while the client is using the catalog and > >> > not during the compilation done on the server. > > >> > > >> Speculation: > >> > > >> I would guess this is because the problem you are having is > >> > > >> happening > >> > during the compilation on the server. > > >> > > >>> 2. At first I thought maybe this was a timing issue (e.g. I was > >> > doing > >> > > >>> the puppet run too quickly after making the resource change) but > >> > > >>> it's > >> > > >>> not; whether I wait 5 seconds or 5 minutes before making the first > >> > > >>> puppet run, the master still doesn't notice the change. I set
[Puppet Users] Filebucket log messages include file content
Hi,

I've recently upgraded our puppetmaster to 2.6. Mostly, everything is fine. However, one thing that I've noticed is that a 0.24 client, when replacing a file, will log the contents of the file in its syslog and also in its report emails.

Mon Nov 15 14:50:30 +0000 2010 /Stage[main]/Misc-apps::Mms-app/Misc-apps::Misc-apps::Datasource[mms-ds.xml]/File[/usr/local/jboss/server/mms/deploy/mms-ds.xml] (notice): Filebucketed to main with sum MMSDS [... rest of file ...]

A 0.25 client doesn't do this; it will log something like

Mon Nov 15 15:00:21 +0000 2010 /Stage[main]/Misc-apps::Bes-app/Misc-apps::Misc-apps::Datasource[bes-ds.xml]/File[/usr/local/jboss/server/bes/deploy/bes-ds.xml]/content (notice): content changed '{md5}ba6c7a361a64eb7768d8b790bae549a0' to 'unknown checksum'

but never actually logs a message to say that it's filebucketing the old file (although the old file _is_ preserved in the bucket).

A 0.24 client talking to a 0.24 server logs this:

Fri Jul 30 14:33:17 +0100 2010 //Node[jo-wsos-ap]/webgroups-app/build-user/File[/export/home/build/.ssh/known_hosts] (notice): Filebucketed to main with sum ade04634fd072069a1a474d78c572271

My config looks like this:

filebucket { main:
    server => "puppetmaster.domain",
}

File {
    backup => main,
}

So, on to the question: can I stop 0.24 clients from printing out file contents when talking to a 2.6 master? It's a bit of a security issue when the files contain passwords or other sensitive information - especially if it happens to get emailed out, or pushed onto the network via syslog.

Cheers,

Chris
[Puppet Users] Re: Monitor puppet runs on clients with nagios
In the end I just changed my script to grep for 'Failed' in the reports YAML files. My script already uses the time of the most recent report YAML file to detect if it's been too long since the most recent report (eg. if the puppetd process has died or something). I'll wait for http://projects.puppetlabs.com/issues/4339 to be completed, I think.

Tim

On Nov 12, 2:45 pm, Doug Warner wrote:
> I use tmz's puppetstatus scripts [1] [2] and they work great for checking the
> last run time from Nagios. I also have reports setup w/ tagmail to send me
> anything with "err" in it.
>
> -Doug
>
> [1] http://markmail.org/message/m6xi34aljso4w5qq
> [2] http://tmz.fedorapeople.org/scripts/puppetstatus/
>
> On 11/11/2010 09:09 AM, Tim wrote:
> > Hi,
> >
> > I was wondering how people here monitor puppet runs on the clients.
> > For puppet 0.25.x I enabled reporting and then wrote a nagios plugin
> > to parse the YAML report files that each client returned after a run.
> > Specifically I was looking for any 'failures' or 'failed_restarts'.
> >
> > Unfortunately with 2.6.2 the format of those YAML files has not only
> > changed but also varies hugely for different hosts depending on how
> > the run went. Plus the sheer size of these files now means it takes
> > too long for PyYAML to parse them (even for only 40 odd hosts).
> >
> > In fact, I don't understand what the YAML reports are useful for -
> > they don't appear to realistically be either human or machine
> > readable.
> >
> > Anyway, what other approaches are there? I'd like to simply see 2
> > things:
> > 1) If there were any failures during the puppet run on the client
> > 2) When the last puppet run on each client was (ie. if it was more
> > than 50 mins ago raise a warning)
[Puppet Users] Re: File type failing how to avoid this?
On Nov 12, 12:28 pm, Roberto Bouza wrote:
> Hello,
>
> Up to right now everything is working great with puppet. I just have
> one question.
>
> Is there a way to tell a type (like file) not to fail if something
> specific happens.

At that level of generality, why would you consider it anything other than a failure when Puppet cannot put the system into the state you asked it to achieve? Puppet will apply as many resources as it can, despite any failures, but it doesn't have a sense of optional state components. If you have not already done so, you may find it useful to read the documentation on the specific resource types you are trying to employ: http://docs.puppetlabs.com/#resource-types.

> Let's say I have a directory which needs to be created if it's not
> there, and then I mount a file system "ro" on top of that.

Puppet is good at that sort of thing.

> The first time it'll work but the second time it will fail with an
> error saying the directory is "ro" and it will fail on recursion.

What does recursion have to do with it? Anyway, it sounds like you may have an error in your manifests.

> There has to be a way to tell puppet that when it is "ro" to just check
> that the file is there and not create it (if you are looking for a file
> inside a "ro" directory).

Puppet will not modify a file (or directory) it is managing if that file already has the characteristics you told Puppet it should have. You don't have to do anything special to get that behavior. Furthermore, you can specify (replace => "no") that Puppet should not modify the content of a managed file if it already exists; that's not relevant for directories or symlinks because they don't have content as such.

> I don't know if its clear what I'm trying to achieve.

No, it's not clear. I will take a stab at giving you something useful, but in the (likely) event that I miss, do please post example manifests that demonstrate your problem.
First, to ensure the presence of a directory named "/ro", owned by root:root and writable only by root:

file { "/ro":
    ensure => "directory",
    owner  => "root",
    group  => "root",
    mode   => "0755",
    # The following should be the default, but since
    # you mentioned a recursion problem:
    recurse => false,
}

Puppet will attempt at every run to ensure that the specified directory exists and has the specified ownership and mode. If you have a file system mounted on it, then that file system may present its view of the owner and mode of the file system root, and that's what Puppet will work with.

Next, to ensure that a file system mount is defined (e.g. in /etc/fstab) and that the corresponding file system is, in fact, mounted:

# (This version is for an NFS file system.
# Adjust as necessary for other FS types.)
mount { "/ro":
    # example:
    device  => "server.my.com:/exports/ro_remote",
    fstype  => "nfs",
    # I infer from the name that you want a read-only mount:
    options => "ro",
    ensure  => "mounted",
    # Puppet should assume this automatically, but it doesn't
    # hurt to be explicit, especially when debugging:
    require => File["/ro"],
}

There are more Mount properties you may want to tweak for your particular situation.

Cheers,

John
[Puppet Users] Re: managing normal users with Puppet
In this context I have a question. I am migrating autoyast settings into Puppet modules. Originally the users are created in the autoyast file for SLES9. The following setting I have for one of my users:

true

Unfortunately I can't find such a flag as a parameter for the puppet 'user' resource.

Christian
[Puppet Users] proper way to purge DB data for retired hosts
Hello...

I've been testing some new servers. I'm using exported resources for several configs (see other email on ssh_known_hosts), including the nagios types (very cool!). Now I need to retire several test servers. How do I 'properly' purge the exported data for these test servers from the mysql DB on the puppetmaster?

/me not a SQL guru...

--
Christopher McCrory
To the optimist, the glass is half full. To the pessimist, the glass is half empty. To the engineer, the glass is twice as big as it needs to be.
[Puppet Users] bug with using exported resources?
Hello...

Is this a bug or by design? I'm using exported resources to generate /etc/ssh/ssh_known_hosts. I changed the example from the docs to this:

@@sshkey { "$fqdn,$hostname,$ipaddress":
    type => rsa,
    key  => $sshrsakey,
}

so that I would get one line per host in the ssh_known_hosts file. What happened was that on each run several (all?) of the exported keys would be re-added. At one point I counted 34 duplicate entries. I changed the module to:

@@sshkey { "$fqdn":
    type => rsa,
    key  => $sshrsakey,
}

@@sshkey { "$hostname":
    type => rsa,
    key  => $sshrsakey,
}

@@sshkey { "$ipaddress":
    type => rsa,
    key  => $sshrsakey,
}

And now I get three entries for each host and no duplicates. Is this a bug?

Using puppet 0.25.4 on Ubuntu 10.04 on the client and puppet 0.25.5 from epel on CentOS; all 32-bit servers.

--
Christopher McCrory
To the optimist, the glass is half full. To the pessimist, the glass is half empty. To the engineer, the glass is twice as big as it needs to be.
[Puppet Users] Next SPUG Meeting 17.11.2010 - Bern (CH)
Hi,

the Swiss Puppet User Group (SPUG) [1] will next meet this Wednesday around 19:00 in Bern [2]. Please note that you should announce yourself to the hosts so that you can get in.

If you have any cool things to present to the local puppet community: bring your slides!

Cu there!

~pete

[1] http://spug.ch
[2] http://spug.ch/2010/11/14/next-spug-meeting-on-wednesday-17.html