[Puppet Users] Re: Forcing puppetd to ask puppetmasterd for new changes

2009-07-01 Thread James Turnbull


Pavel Shevaev wrote:
> On Thu, Jul 2, 2009 at 12:53 AM, Macno wrote:
>> Well, since by default the puppet daemon checks the puppetmaster every
>> 30 mins (can be reduced, but I don't think it's a great idea on large
>> installations)
> 
> Could you please tell me which config option is responsible for that?

--runinterval

See http://reductivelabs.com/trac/puppet/wiki/ConfigurationReference.
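For example, a sketch of the relevant puppet.conf stanza (the value is in
seconds; 1800 is the 30-minute default):

[puppetd]
    runinterval = 1800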

Regards

James Turnbull

--
Author of:
* Pro Linux Systems Administration
(http://tinyurl.com/linuxadmin)
* Pulling Strings with Puppet
(http://tinyurl.com/pupbook)
* Pro Nagios 2.0
(http://tinyurl.com/pronagios)
* Hardening Linux
(http://tinyurl.com/hardeninglinux)




[Puppet Users] Generating a file from a set of fragments on the puppetmaster

2009-07-01 Thread Paul Gear
Hi,

I'm trying to create a squid url_regex ACL source file for various
different sites.  Each site needs a slightly different configuration, so
my plan was to create a bunch of files on the server, then drag them
down and concatenate them into a single file on the client.

I found http://reductivelabs.com/trac/puppet/wiki/CompleteConfiguration
and I've been trying to understand its approach to constructing a file
out of fragments.  Am I right in thinking that I need to distribute an
entire directory from the server, then use concatenated_file to combine
those files into one file on the puppet agent?

I'd rather not distribute the entire directory from the server, since it
contains custom content for each node.  Is there a way I can do this
with templates that include other files?  (Or templates that are plain
text rather than .erb?)  I'd really like to find a technique that
doesn't require separately copying the file fragments to the client as well...

I've had three or four tries at getting the right approach and am still no
closer to a working solution.  Attached is my non-working attempt at a
class to do this - what am I doing wrong?  (Comments about what's not
working can be found after the @@@ comments.)  I thought this was a
fairly simple problem, but I've been banging my head against it all day
without any success.

Paul



#
# puppet class to distribute a copy of squid's url_regex acl file nopasswordsites.txt.
#

class squid {
    # ensure package is installed
    package { "squid": ensure => installed }

    # create a directory for storing the parts
    $basedir = "/etc/squid/nopasswordsites.d"
    file { $basedir:
        ensure => directory,
        owner  => root,
        group  => root,
        mode   => 755,
    }

    # these are the files that should be in the directory
    $squid_nopassword_files = [
        "HEADER",
        "$hostname.permanent",
        "$hostname.temporary",
        "ISP.$isp",
        "FOOTER",
    ]

    # @@@ We need another copy of the file names that's fully qualified because
    # @@@ there's no basename function which can be used in the source
    # @@@ specification in squid_nopassword_file above, and the content
    # @@@ parameter on the file resource below needs a fully-qualified file name.
    $squid_nopassword_files_fully_qualified = [
        "$basedir/HEADER",
        "$basedir/$hostname.permanent",
        "$basedir/$hostname.temporary",
        "$basedir/ISP.$isp",
        "$basedir/FOOTER",
    ]

    define squid_nopassword_file () {
        file { "$basedir/$name":
            owner  => root,
            group  => root,
            mode   => 644,
            path   => "$basedir/$name",
            source => "puppet:///squid/nopasswordsites/$name",
        }
    }

    # realise each of the files
    # @@@ We seem to need to use a define for this rather than just using
    # @@@ file { $squid_nopassword_files: ... } because in the current scope
    # @@@ $name is the name of the class (squid), not the name of the file resource.
    squid_nopassword_file { $squid_nopassword_files: }

    # concatenate all those files into our target file
    # @@@ This doesn't work!  It gives the error: "err: Could not retrieve catalog:
    # @@@ Files must be fully qualified at /etc/puppet/modules/squid/manifests/init.pp:39"
    # @@@ This occurs regardless of whether $squid_nopassword_files or
    # @@@ $squid_nopassword_files_fully_qualified is used.
    file { "/etc/squid/nopasswordsites.tmp":
        owner   => root,
        group   => root,
        mode    => 644,
        content => file($squid_nopassword_files_fully_qualified),
    }
}
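A minimal sketch of one way to get the concatenation without the file() call,
assuming the concatenated_file define from the "common" module linked off the
CompleteConfiguration wiki page (its dir parameter names a directory whose
files are joined, in lexical order, into the target file); the class name,
fileserver path and target path here are placeholders:

class squid_nopasswordsites_sketch {
    $basedir = "/etc/squid/nopasswordsites.d"

    # pull this host's fragments down into the fragment directory
    file { $basedir:
        ensure  => directory,
        recurse => true,
        purge   => true,
        source  => "puppet:///squid/nopasswordsites/$hostname",
    }

    # join everything in $basedir into the file squid actually reads
    concatenated_file { "/etc/squid/nopasswordsites.txt":
        dir     => $basedir,
        require => File[$basedir],
    }
}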




[Puppet Users] Re: Unable to get storedconfigs to work

2009-07-01 Thread Greg

Might as well... It's an easy enough fix... So long as it doesn't break
older versions of Rails...

Greg

On Jul 1, 5:56 pm, Felix Schäfer  wrote:
> Am 01.07.2009 um 03:24 schrieb Greg:
>
> > I've gotten it working with 2.3.2... But I did have to put in the
> > require lines
> > as was mentioned in a previous message...
>
> I must say that I'm not very happy with this solution as it seems more  
> hackish than anything, but it does work when adding the few require  
> lines. Anyhow, this should be fixed to work with the current stable  
> rails, shall I reopen #2041 or file a new bug?
>
> Felix



[Puppet Users] Re: Puppet reparsing puppet.conf every 24 hours - is this configurable?

2009-07-01 Thread Greg

I don't want to disable reparsing though... I push out a puppet.conf
through Puppet, so if I disable reparsing I have to use a service with
a refresh option, and I'm having trouble keeping Puppet running happily
when I use that... Gets stuck with its pants down...

Not that the config changes much anymore; it's mostly for new
rollouts... But it is nice to be able to tweak the runinterval as the
site grows...

Greg

On Jul 1, 12:45 pm, Nigel Kersten  wrote:
> Does setting filetimeout to 0 work? Feels like you should be able to
> disable re-parsing the config files in Puppet if you want, and if that
> doesn't work, I'd file a bug.
>
>
>
> On Tue, Jun 30, 2009 at 6:22 PM, Greg wrote:
>
> > The netbackup fix isn't an option unfortunately - causes enough other
> > grief apparently to be not worth doing.
>
> > I went through the code and looked at what is involved in changing it to
> > mtime as an option, and it's quite trivial, even for someone like myself
> > who doesn't know Ruby well enough yet. My main issue is that I don't
> > know what depends on that - i.e. what its impact is... Maybe I submit
> > it as a patch and see what the powers that be think of it...
>
> > But on the other hand, it's such a minor thing that has no real impact,
> > so it's almost not worth it...
>
> > Greg
>
> > On Jul 1, 12:44 am, Nigel Kersten  wrote:
> >> On Mon, Jun 29, 2009 at 5:50 PM, Greg wrote:
>
> >> > Nigel,
>
> >> > Actually, it's happening 10 mins into backups... And it's using
> >> > Netbackup...
>
> >> > Looks like I'm stuck with it, unless it's possible to get that check to
> >> > happen on mtime
> >> > instead of ctime... (Of course then there's the question of which is
> >> > more useful, etc...)
> >> > It's not a major issue... The only real issue is that it pollutes the
> >> > logs a little bit...
>
> >>http://seer.entsupport.symantec.com/docs/200644.htm
>
> >> a. At some sites, saving and restoring a file's atime, while leaving
> >> the file's ctime in its changed state, may present a problem. To cause
> >> NetBackup to not reset the file's access time, insert the keyword
> >> DO_NOT_RESET_FILE_ACCESS_TIME in the /usr/openv/netbackup/bp.conf file
> >> of the client.
>
> >> so this will mean that your atime is continually changing as files are
> >> backed up, but as the atime is not reset, the ctime will be left
> >> alone.
>
> >> Until I found that I had vague thoughts of a Puppet patch to use a
> >> checksum instead of ctime for parsed files, but this is the only time
> >> using the ctime has bothered me.
>
> >> > Thanks,
>
> >> > Greg
>
> >> > On Jun 30, 1:50 am, Nigel Kersten  wrote:
> >> >> On Sun, Jun 28, 2009 at 4:42 PM, Greg wrote:
>
> >> >> > Hi all,
>
> >> >> > I've noticed roughly every 24 hours my puppetmasters will reread the
> >> >> > puppet.conf even if there is no change to
> >> >> > the file. Logs look something like this:
>
> >> >> > Jun 27 18:15:10 puppet-prod puppetmasterd[15161]: [ID 702911
> >> >> > daemon.notice] Sat Jun 27 18:09:43 +1000 2009 vs Fri Jun 26 18:06:11
> >> >> > +1000 2009
> >> >> > Jun 27 18:15:17 puppet-prod puppetmasterd[15200]: [ID 702911
> >> >> > daemon.notice] Reparsing /etc/opt/csw/puppet/puppet.conf
>
> >> >> > Does anyone know if we can influence the frequency of this? I'd like
> >> >> > to make it less frequent as it re-reads the config file as soon as its
> >> >> > changed anyway (not that it changes much anyway)... Maybe weekly is
> >> >> > sufficient...
>
> >> >> Does this happen to correlate with backup times on these servers?
>
> >> >> I ran into an issue with NetBackup where by default it restores the
> >> >> atime of a file after backing it up, which modifies the ctime of the
> >> >> file, which causes Puppet to think that the file has changed and
> >> >> reparse it.
>
> >> >> > thanks,
>
> >> >> > Greg
>
> >> >> --
> >> >> Nigel Kersten
> >> >> nig...@google.com
> >> >> System Administrator
> >> >> Google, Inc.
>
> >> --
> >> Nigel Kersten
> >> nig...@google.com
> >> System Administrator
> >> Google, Inc.
>
> --
> Nigel Kersten
> nig...@google.com
> System Administrator
> Google, Inc.



[Puppet Users] Re: Forcing puppetd to ask puppetmasterd for new changes

2009-07-01 Thread Pavel Shevaev

On Thu, Jul 2, 2009 at 12:53 AM, Macno wrote:
>
> Well, since by default the puppet daemon checks the puppetmaster every
> 30 mins (can be reduced, but I don't think it's a great idea on large
> installations)

Could you please tell me which config option is responsible for that?

> if you want to trigger a puppetrun whenever you want,
> puppetrun is the way.
> This can leave you the option to force the puppet daemon checks (or
> cronjob runs) less frequently, which is a not a bad thing in most
> cases.

The most flexible way for me would be to decrease the sync interval
for each puppetd to about 3 minutes and run each puppetd with the
--listen option.
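A sketch of what that combination would look like in puppet.conf on each
client (runinterval is in seconds, so 180 is roughly 3 minutes):

[puppetd]
    runinterval = 180
    listen      = true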

-- 
Best regards, Pavel




[Puppet Users] Re: HTTP as a source for files

2009-07-01 Thread Greg

Just did a quick search. Looks like you can put an MD5 checksum into
the headers with Apache quite easily:
http://httpd.apache.org/docs/2.2/mod/core.html#contentdigest

Haven't played with it yet, but the doco does indicate a bit of a
performance hit, as it doesn't cache the checksums... Not surprising,
since content could be dynamically generated.
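For reference, turning it on is a single directive (per the Apache docs
linked above; it can also live in a <Directory> or <Location> block):

ContentDigest On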

Greg

On Jul 1, 8:17 pm, David Schmitt  wrote:
> Robin Sheat wrote:
> > On Wednesday 01 July 2009 14:14:36 Greg wrote:
> >> The main question would be in terms of how to detect file changes
> >> without a full transfer - HTTP does provide some mechanisms for
> >> checking this, but I'm not sure if they would be adequate if scripting
> >> responses through HTTP...
>
> > I use S3 as a file source for my larger files, it allows contents to be
> > verified by MD5. My code for this is available here:
> >https://code.launchpad.net/~eythian/+junk/ec2facts
> > it's pretty basic, but gets the job done.
>
> > I mention this because a similar approach should be usable when backing with
> > HTTP and Apache. You could either do a HEAD request with 
> > 'If-Modified-Since',
> > and ensure that when you save the file, you update the file timestamp to 
> > that
> > supplied by apache, or check to see if apache will provide the MD5 (or
> > whatever) hash of the file contents. If the HEAD request indicates that 
> > there
> > is an updated version, then you pull it down using wget or similar.
>
> The two classical approaches to this are either properly configured ETag
> support or using the checksum as part of the filename and never refetch
> a file unless its filename has changed.
>
> Regards, DavidS



[Puppet Users] Re: Manage Directory/Purge Contents technique not working?

2009-07-01 Thread Joshua Barratt
Answering myself here,
Turns out the empty directory you point at DOES need to exist, contrary to
the explicit directions in the Wiki.

I'll update the wiki.

Josh

On Wed, Jul 1, 2009 at 8:42 AM, Joshua Barratt wrote:

>
> Hello all,
>
> I have done my best to RTFM, and this seems to be exactly the problem
> I have:
>
>
> http://reductivelabs.com/trac/puppet/wiki/FrequentlyAskedQuestions#i-want-to-manage-a-directory-and-purge-its-contents
>
> I am attempting to manage a lighttpd module, so I'd like to manage
> what's in the various conf-* directories.
>
> Thus,
>
> file { ["$conf_dir/ssl", "$conf_dir/conf-include", "$conf_dir/conf-enabled",
>          "$conf_dir/conf-available"]:
>ensure => directory,
>purge => true,
>recurse => true,
>force => true,
>source => "puppet:///lighttpd/empty",
> }
>
> Sadly, I get a string of errors:
>
> err: //lighttpd/File[/etc/lighttpd/ssl]: Failed to generate additional
> resources during transaction: None of the provided sources exist
> debug: //lighttpd/File[/etc/lighttpd/ssl]/checksum: Initializing
> checksum hash
> debug: //lighttpd/File[/etc/lighttpd/ssl]: Creating checksum {mtime}
> Fri Jun 19 01:39:34 -0700 2009
> err: //lighttpd/File[/etc/lighttpd/ssl]: Failed to retrieve current
> state of resource: No specified source was found from
> puppet:///lighttpd/empty
> err: //lighttpd/File[/etc/lighttpd/conf-enabled]: Failed to generate
> additional resources during transaction: None of the provided sources
> exist
> debug: //lighttpd/File[/etc/lighttpd/conf-enabled]/checksum:
> Initializing checksum hash
> debug: //lighttpd/File[/etc/lighttpd/conf-enabled]: Creating checksum
> {mtime}Fri Jun 19 13:58:35 -0700 2009
> err: //lighttpd/File[/etc/lighttpd/conf-enabled]: Failed to retrieve
> current state of resource: No specified source was found from
> puppet:///lighttpd/empty
>
> And so forth.
>
> And, these aren't 'ignorable' errors -- it doesn't actually clean the
> directory.
>
> This is pretty essential to have working, as otherwise you get what
> led me to try and track this down in the first place, which is a
> change to the puppet manifest leading to 2 different lighttpd configs
> trying to have the same Document Root directory. (When, in fact, one
> of them should have no longer existed.)
>
> I have the identical problem managing an /etc/monit.d/* directory's
> contents -- switching from, say, apache to lighttpd means monit will
> still have the /etc/monit.d/apache file in there and they'll be
> fighting over port 80. (Sadface.)
>
> The last message referring to this as a valid workaround seems to have
> been about 2 months ago, so it "should" still work?
>
> puppet: 0.24.8
> facter: 1.5.1
> ruby: ruby 1.8.7 (2008-08-11 patchlevel 72) [x86_64-linux]
>
> >
>




[Puppet Users] Re: pkg_deploy on Macs - how to maintain "State"

2009-07-01 Thread gregnea...@mac.com

Comments in-line...

On Jun 29, 8:40 am, Nigel Kersten  wrote:
> So along these lines, there are two things I've been thinking about to
> improve package distribution with Puppet.
>
> a) Add a parameter called something like "creates" that would be an
> array of files and/or directories that a given package is expected to
> create. If any of these are missing, Puppet reinstalls the package,
> regardless of the state of the marker file.

munki's package metadata has an optional key for each installation item
called "installs" which is an array of items that can be application
bundles, other bundle types, Info.plists, directories, and simple files.
munki can check for simple existence, or compare against a stored checksum,
or check version info in the case of bundles and Info.plists, and based on
this info, decide a package needs to be installed (or re-installed). This
provides a measure of "self-healing".

munki also uses the standard Receipts mechanism and the package
database - so if a package has been manually installed, munki won't
try again.


> b) Work on a "munki" provider.
>
> "Munki" is a project a friend of mine Greg Neagle has been working on
> to produce a good third party pkg based repository system for OS X.
>
> http://code.google.com/p/munki/
>
> Munki "knows" whether a given package is installed, flat or bundle
> based, so it could be used to solve this problem.
>
> The primary problem we have with the pkgdmg provider is that there is
> no necessary relationship between the dmg name and the pkg bundle
> identifier, and the marker file is based upon the former, whereas the
> package receipt is based upon the latter.

munki's metadata formalizes the relationship between pkg ids and the
install item (.dmg or flat package).

-Greg




[Puppet Users] Re: Installing applications using puppet

2009-07-01 Thread Macno

I once wrote this class for a MailScanner installation; it can serve as an example.

class mailscanner {
    $mailscannerver  = "MailScanner-4.69.9-2"
    $mailscannerfile = "$mailscannerver.rpm.tar.gz"

    exec { "mailscanner_prerequisites":
        command => $operatingsystem ? {
            default => "yum install -y wget tar gzip rpm-build binutils glibc-devel gcc make",
        },
        onlyif  => "test ! -f /usr/src/$mailscannerfile",
    }

    exec { "mailscanner_download":
        command => "cd /usr/src ; wget http://www.mailscanner.info/files/4/rpm/$mailscannerfile",
        onlyif  => "test ! -f /usr/src/$mailscannerfile",
        require => Exec["mailscanner_prerequisites"],
    }

    exec { "mailscanner_extract":
        command => "cd /usr/src ; tar -zxvf $mailscannerfile",
        require => Exec["mailscanner_download"],
        onlyif  => "test ! -d /usr/src/$mailscannerver",
    }

    exec { "mailscanner_install":
        command => "cd /usr/src/$mailscannerver ; ./install.sh",
        require => [
            Exec["mailscanner_extract"],
            Package["spamassassin"],
            Package["clamav"]
        ],
        unless  => "rpm -qi mailscanner",
    }

    service { mailscanner:
        name       => "MailScanner",
        ensure     => running,
        enable     => true,
        hasrestart => true,
        hasstatus  => true,
        require    => Exec["mailscanner_install"],
    }
}




On Jun 25, 10:42 pm, Neil K  wrote:
> Hi all,
>
> I am pretty new to Puppet. My puppet master server is a RHEL 5 box and
> puppet client is a CentOS 5.3 vm. I have managed to configure puppet
> server to successfully install.and upgrade rpm based packages on the
> client machine. Is it possible to install noon-rpm based packages
> using puppet? Like packages comes as tar.gz such as web based
> applications?
>
> If it is possible, please provide any example manifests or any good
> documents that I can follow.
>
> Thanks,
> Neil



[Puppet Users] Re: Forcing puppetd to ask puppetmasterd for new changes

2009-07-01 Thread Macno

Well, since by default the puppet daemon checks the puppetmaster every
30 mins (can be reduced, but I don't think it's a great idea on large
installations) if you want to trigger a puppetrun whenever you want,
puppetrun is the way.
This can leave you the option to force the puppet daemon checks (or
cronjob runs) less frequently, which is not a bad thing in most
cases.

my 2c
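For the cron-driven variant mentioned here, a sketch of the crontab entry
(assumes puppetd lives in /usr/sbin and the master is reachable as
master.host, matching the invocation quoted further down the thread):

# /etc/crontab -- one-shot puppetd run every 30 minutes
*/30 * * * * root /usr/sbin/puppetd --no-daemonize --onetime --server master.host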

On Jul 1, 6:12 pm, Pavel Shevaev  wrote:
> On Wed, Jul 1, 2009 at 5:12 PM, Roberto Moral wrote:
>
> > In order to use puppetrun you need to run puppetd with the --listen
> > option, you will also need a namespaceauth.conf client side 
> > (http://reductivelabs.com/trac/puppet/wiki/NameSpaceAuth
> > )
>
> > fromhttp://reductivelabs.com/trac/puppet/wiki/PuppetExecutables#id7
>
> Thank you for the quick answer. So, I guess I have 2 options then:
>  a) running puppetd via cron, e.g: puppetd --no-daemonize --onetime
> --server master.host
>  b) running puppetd as a daemon with --listen option and force updates
> using puppetrun
>
> What's the most preferred way? Or maybe there is even a better option?
>
> --
> Best regards, Pavel



[Puppet Users] Re: Login to puppet on IRC

2009-07-01 Thread Avi Miller

Sharada wrote:
> Any one able to join puppet chat from :
> http://reductivelabs.com/home/irc/

Freenode has banned all web clients[1] except their own.

You need to use http://webchat.freenode.net instead.

cYa,
Avi



[1] http://blog.freenode.net/2009/06/new-freenode-webchat-and-why-to-use-it/




[Puppet Users] nfs and autofs modules online.

2009-07-01 Thread Udo Waechter

Hey all.

I have just published my autofs and nfs modules. The autofs module has  
a define to manage the autofs-daemon and its mount-maps. That is still  
a little bit ugly, but works.
The nfs-module manages nfs-server(s) (FreeBSD/Debian) and clients  
(Debian/Darwin). The nfs-module requires the autofs module.


svn co https://svn.ikw.uos.de/dav/ikadmin/public/modules/nfs
svn co https://svn.ikw.uos.de/dav/ikadmin/public/modules/autofs

Here's the WebUI:
https://svn.ikw.uos.de/pubwebsvn/listing.php?repname=ikwadmin&path=%2Fpublic%2Fpuppet%2Fmodules%2F#_public_puppet_modules_



The nfs-export type that I have written is included in the nfs-module;
see the threads below:
http://groups.google.com/group/puppet-dev/browse_thread/thread/703d53e2b052e956
http://groups.google.com/group/puppet-dev/browse_thread/thread/56f0f70bfb6b9d6a

There are still some more features to implement, especially for the  
autofs module. Now that I have learned how to write types I will start  
rewriting the module soon.


Feedback welcome. Have fun.
udo.
--
---[ Institute of Cognitive Science @ University of Osnabrueck
---[ Albrechtstrasse 28, D-49076 Osnabrueck, 969-3362
---[ Documentation: https://doc.ikw.uni-osnabrueck.de







[Puppet Users] Re: Puppet Implementation

2009-07-01 Thread Pete Emerson
If there is no default config file, you want to put a default config file in
place, but otherwise, leave it alone?

If so, one way to do it would be to set "replace => false" on the file
resource ("unless" and "onlyif" are exec parameters, not file parameters).
Something like this should work (untested by me), although there may be a
"better" way to do it:

file { "/etc/httpd/conf/httpd.conf":
ensure => file,
owner => root,
group => root,
mode => 0644,
content => template("/var/lib/puppet/files/httpd.conf"),
notify => Service[httpd],
unless => "ls /etc/httpd/conf/httpd.conf"
}


On Wed, Jul 1, 2009 at 10:47 AM, Tim Galyean  wrote:

>
> The company I work for is getting ready to deploy a large puppet
> configuration into an existing environment. The majority of the
> servers that this will be deployed on are web servers, however some of
> them are configured different from the rest.
>
> We have a set of default config files for apache, mysql and so forth,
> however my quesiton is:
>
> Is there a way to "tell" puppet to do a sort of comparison on the
> files, so that if one does not match the default config it is ignored
> and or not replaced with the default.
>
> Any help with this is greatly appreciated, Thanks ahead of time.
>
> >
>




[Puppet Users] Re: Login to puppet on IRC

2009-07-01 Thread Pete Emerson
I'm getting the same via Firefox on Mac. Doesn't look like it's browser
specific.

On Wed, Jul 1, 2009 at 10:35 AM, Sharada  wrote:

> Any one able to join puppet chat from :
> http://reductivelabs.com/home/irc/
>
> I tried from IE and Firefox. It says ' Login Terminated'
>
> >
>




[Puppet Users] Manage Directory/Purge Contents technique not working?

2009-07-01 Thread Joshua Barratt

Hello all,

I have done my best to RTFM, and this seems to be exactly the problem
I have:

http://reductivelabs.com/trac/puppet/wiki/FrequentlyAskedQuestions#i-want-to-manage-a-directory-and-purge-its-contents

I am attempting to manage a lighttpd module, so I'd like to manage
what's in the various conf-* directories.

Thus,

file { ["$conf_dir/ssl", "$conf_dir/conf-include", "$conf_dir/conf-enabled",
        "$conf_dir/conf-available"]:
    ensure  => directory,
    purge   => true,
    recurse => true,
    force   => true,
    source  => "puppet:///lighttpd/empty",
}

Sadly, I get a string of errors:

err: //lighttpd/File[/etc/lighttpd/ssl]: Failed to generate additional
resources during transaction: None of the provided sources exist
debug: //lighttpd/File[/etc/lighttpd/ssl]/checksum: Initializing
checksum hash
debug: //lighttpd/File[/etc/lighttpd/ssl]: Creating checksum {mtime}
Fri Jun 19 01:39:34 -0700 2009
err: //lighttpd/File[/etc/lighttpd/ssl]: Failed to retrieve current
state of resource: No specified source was found from puppet:///lighttpd/empty
err: //lighttpd/File[/etc/lighttpd/conf-enabled]: Failed to generate
additional resources during transaction: None of the provided sources
exist
debug: //lighttpd/File[/etc/lighttpd/conf-enabled]/checksum:
Initializing checksum hash
debug: //lighttpd/File[/etc/lighttpd/conf-enabled]: Creating checksum
{mtime}Fri Jun 19 13:58:35 -0700 2009
err: //lighttpd/File[/etc/lighttpd/conf-enabled]: Failed to retrieve
current state of resource: No specified source was found from
puppet:///lighttpd/empty

And so forth.

And, these aren't 'ignorable' errors -- it doesn't actually clean the
directory.

This is pretty essential to have working, as otherwise you get what
led me to try and track this down in the first place, which is a
change to the puppet manifest leading to 2 different lighttpd configs
trying to have the same Document Root directory. (When, in fact, one
of them should have no longer existed.)

I have the identical problem managing an /etc/monit.d/* directory's
contents -- switching from, say, apache to lighttpd means monit will
still have the /etc/monit.d/apache file in there and they'll be
fighting over port 80. (Sadface.)

The last message referring to this as a valid workaround seems to have
been about 2 months ago, so it "should" still work?

puppet: 0.24.8
facter: 1.5.1
ruby: ruby 1.8.7 (2008-08-11 patchlevel 72) [x86_64-linux]




[Puppet Users] Puppet Implementation

2009-07-01 Thread Tim Galyean

The company I work for is getting ready to deploy a large puppet
configuration into an existing environment. The majority of the
servers that this will be deployed on are web servers, however some of
them are configured different from the rest.

We have a set of default config files for apache, mysql and so forth;
however, my question is:

Is there a way to "tell" puppet to do a sort of comparison on the
files, so that if one does not match the default config it is ignored
and/or not replaced with the default?

Any help with this is greatly appreciated. Thanks ahead of time.




[Puppet Users] Manage Directory/Purge Contents technique not working?

2009-07-01 Thread Joshua Barratt
Hello all,
I attempted to RTFM on this one but to no avail.
(Also, this may be a repost? Google appeared to eat the first email I sent
several hours ago.)

I am trying to clean out the /etc/lighttpd/sites-enabled directory using
this technique:

http://reductivelabs.com/trac/puppet/wiki/FrequentlyAskedQuestions#i-want-to-manage-a-directory-and-purge-its-contents

This is needed because, let us say, a node goes from running 'myoldsite' to
running 'mynewsite'. If I don't purge the sites-enabled directory, I'll try
to load both lighttpd configs simultaneously, which conflict.

Here's my code:

file { ["$conf_dir/ssl", "$conf_dir/conf-include", "$conf_dir/conf-enabled",
        "$conf_dir/conf-available"]:
    ensure  => directory,
    purge   => true,
    recurse => true,
    force   => true,
    source  => "puppet:///lighttpd/empty",
}

I get errors like this:
err: //lighttpd/File[/etc/lighttpd/conf-enabled]: Failed to generate
additional resources during transaction: None of the provided sources exist
debug: //lighttpd/File[/etc/lighttpd/conf-enabled]/checksum: Initializing
checksum hash
debug: //lighttpd/File[/etc/lighttpd/conf-enabled]: Creating checksum
{mtime}Tue Jun 30 19:46:37 -0700 2009
err: //lighttpd/File[/etc/lighttpd/conf-enabled]: Failed to retrieve current
state of resource: No specified source was found from
puppet:///lighttpd/empty

I need to do the same thing for 'monit', so I tested this in my monit module
as well, with the identical (i.e. errors + doesn't work) results.

The above FAQ link was handed out as recently as May 27th-ish, so if this
technique is deprecated, it hasn't been for long.

And the salient bits:
puppetversion => 0.24.8
rubyversion => 1.8.7
facterversion => 1.5.1

Would appreciate any guidance.




[Puppet Users] Login to puppet on IRC

2009-07-01 Thread Sharada
Any one able to join puppet chat from :
http://reductivelabs.com/home/irc/

I tried from IE and Firefox. It says ' Login Terminated'




[Puppet Users] Multiple default providers for service: init, base; using init

2009-07-01 Thread Pete Emerson
I have a bunch of CentOS machines. In the process of puppetizing one of them
I'm getting this warning (more complete debug info at the end):

warning: Found multiple default providers for service: init, base; using
init
info: /Service[gmond]: Provider init does not support features enableable;
not managing attribute enable

Given that all of my instances are running the same OS and should be
identical, and none of my other instances are exhibiting this problem,
something on this instance must be slightly different.

I've run the client in debug mode (see below) and the puppetmaster in debug
mode, but haven't seen anything that is causing this issue (like the default
PATH, for example). How do I best go about figuring out what is going on and
fixing it?

/usr/sbin/puppetd --test --debug --verbose

debug: Creating default schedules
...
debug: Finishing transaction 23456258353740 with 0 changes
debug: Loaded state in 0.03 seconds
debug: Retrieved facts in 0.16 seconds
debug: Retrieving catalog
debug: Calling puppetmaster.getconfig
debug: Retrieved catalog in 1.02 seconds
debug: Puppet::Network::Client::File: defining fileserver.describe
debug: Puppet::Network::Client::File: defining fileserver.list
debug: Puppet::Network::Client::File: defining fileserver.retrieve
debug: Puppet::Type::Package::ProviderRpm: Executing '/bin/rpm --version'
debug: Puppet::Type::Package::ProviderYum: Executing '/bin/rpm --version'
debug: Puppet::Type::Package::ProviderUrpmi: Executing '/bin/rpm -ql rpm'
debug: Puppet::Type::Package::ProviderAptrpm: Executing '/bin/rpm -ql rpm'
debug: Puppet::Type::Package::ProviderHpux: file /usr/sbin/swinstall does
not exist
debug: Puppet::Type::Package::ProviderAppdmg: file /Library/Receipts does
not exist
debug: Puppet::Type::Package::ProviderFink: file /sw/bin/fink does not exist
debug: Puppet::Type::Package::ProviderUp2date: file /usr/sbin/up2date-nox
does not exist
debug: Puppet::Type::Package::ProviderDpkg: file /usr/bin/dpkg does not
exist
debug: Puppet::Type::Package::ProviderUrpmi: file urpmi does not exist
debug: Puppet::Type::Package::ProviderGem: file gem does not exist
debug: Puppet::Type::Package::ProviderPorts: file /usr/sbin/pkg_info does
not exist
debug: Puppet::Type::Package::ProviderSun: file /usr/bin/pkginfo does not
exist
debug: Puppet::Type::Package::ProviderPkgdmg: file /Library/Receipts does
not exist
debug: Puppet::Type::Package::ProviderAptitude: file /usr/bin/aptitude does
not exist
debug: Puppet::Type::Package::ProviderSunfreeware: file pkg-get does not
exist
debug: Puppet::Type::Package::ProviderOpenbsd: file pkg_info does not exist
debug: Puppet::Type::Package::ProviderDarwinport: file /opt/local/bin/port
does not exist
debug: Puppet::Type::Package::ProviderAptrpm: file apt-get does not exist
debug: Puppet::Type::Package::ProviderFreebsd: file /usr/sbin/pkg_info does
not exist
debug: Puppet::Type::Package::ProviderApt: file /usr/bin/apt-get does not
exist
debug: Puppet::Type::Package::ProviderRug: file /usr/bin/rug does not exist
debug: Puppet::Type::Package::ProviderPortage: file /usr/bin/emerge does not
exist
debug: Puppet::Type::Package::ProviderApple: file /Library/Receipts does not
exist
debug: Puppet::Type::Service::ProviderRunit: file /usr/bin/sv does not exist
debug: Puppet::Type::Service::ProviderDaemontools: file /usr/bin/svc does
not exist
debug: Puppet::Type::Service::ProviderDebian: file /usr/sbin/update-rc.d
does not exist
warning: Found multiple default providers for service: init, base; using
init
info: /Service[gmond]: Provider init does not support features enableable;
not managing attribute enable
debug: Puppet::Type::User::ProviderNetinfo: file nireport does not exist
debug: Puppet::Type::User::ProviderPw: file pw does not exist
debug: Puppet::Type::Group::ProviderNetinfo: file nireport does not exist
debug: Puppet::Type::Group::ProviderPw: file /usr/sbin/pw does not exist
info: /Service[syslog-ng]: Provider init does not support features
enableable; not managing attribute enable
debug: Creating default schedules
debug: Finishing transaction 23456269726200 with 0 changes
info: Caching catalog at /var/lib/puppet/localconfig.yaml
notice: Starting catalog run
...




[Puppet Users] Re: Workstations and Certs

2009-07-01 Thread Michael Semcheski

On Wed, Jul 1, 2009 at 12:02 PM, Kurt Engle wrote:
> Wouldn't I achieve the same outcome with using a single cert for all
> machines without the need for special scripts to delete certs from the
> server and delete files from the client? Also, with respect to autosign...
> would I really be able to turn it off using the SSH method below?

The client creates a cert and then gives it to the server.  You tell
the server to authorize it or not.  But that process doesn't
necessarily require manual intervention.  It is very scriptable.

The ssh method I described would be able to do all of that, and it
would probably be simpler to implement than you realize, assuming the
freshly imaged machines could ssh to the puppetmaster.

The script would be something like this...

HOSTNAME=`hostname -f`
ssh puppetmaster "/usr/sbin/puppetca --clear $HOSTNAME"
puppetd -w 90
ssh puppetmaster "/usr/sbin/puppetca -s $HOSTNAME"


Then add a module that removes that script from the machine.

In the example I gave above, I can't remember the specific options
that puppetca requires, but I think it's close.

Again, all you need to do is add the ssh key to the base image, and
add it to the authorized_keys on the puppetmaster.
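For the forced-command part, a sketch of the authorized_keys entry on the
puppetmaster (the wrapper name and key material are placeholders;
"puppetca-wrapper" would be a small script that runs $SSH_ORIGINAL_COMMAND
only when it begins with /usr/sbin/puppetca):

command="/usr/local/sbin/puppetca-wrapper",no-port-forwarding,no-pty,no-agent-forwarding ssh-rsa AAAA...imaging-key... imaging@base-image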




[Puppet Users] Re: Workstations and Certs

2009-07-01 Thread Nigel Kersten

On Wed, Jul 1, 2009 at 9:02 AM, Kurt Engle wrote:
> Thanks for the suggestions.
>
> Wouldn't I achieve the same outcome with using a single cert for all
> machines without the need for special scripts to delete certs from the
> server and delete files from the client?

I like being able to still revoke an individual certificate centrally.

> Also, with respect to autosign...
> would I really be able to turn it off using the SSH method below? Doesn't
> the client still have to ask the server for a cert after it has been
> re-imaged? With a single cert, it seems that the client would already have a
> cert that I have distributed with the image and therefore, would not have to
> ask for a cert and autosign could be turned off.
>

I think the plan would be to ship an ssh key on the image and do:

* ssh to puppet CA, generate cert
* copy certificate(s) to client

Then the client wouldn't need to ask for a certificate.

> -kurt
>
>
> On Tue, Jun 30, 2009 at 4:47 PM, Nigel Kersten  wrote:
>>
>> On Tue, Jun 30, 2009 at 4:32 PM, Michael Semcheski
>> wrote:
>> >
>> > On Tue, Jun 30, 2009 at 6:36 PM, Kurt Engle wrote:
>> >> Our imaging process takes an OS base image with a few apps that include
>> >> Puppet and Facter and installs it on the Mac. This is over the network.
>> >> When the Mac reboots it sets the hostname of the computer to the Mac's
>> >> serial number and auto starts puppet. I do have my puppetmaster (CA) set
>> >> to autosign certs, eliminating my intervention. This process is working
>> >> well.
>> >
>> > What if you add an ssh key to the base OS image, and a script to be
>> > run that contacts the puppet server using the ssh key, and clears any
>> > cert that may exist for that client.  (It could also add the newly
>> > created cert..)  You can set the ssh server to recognize that when
>> > that key (from the base image) is used, the only command that may be
>> > run is /usr/sbin/puppetca.
>> >
>> > That way, when the machine is reimaged, after its first boot it takes
>> > care of the certification issue.  Then, once puppet is running on the
>> > machine, you could have it remove the ssh key and the startup script.
>>
>> I like this idea. You could even turn off autosign then.
>>
>>
>>
>> --
>> Nigel Kersten
>> nig...@google.com
>> System Administrator
>> Google, Inc.
>>
>>
>
>
> >
>



-- 
Nigel Kersten
nig...@google.com
System Administrator
Google, Inc.




[Puppet Users] Re: Forcing puppetd to ask puppetmasterd for new changes

2009-07-01 Thread Pavel Shevaev

On Wed, Jul 1, 2009 at 5:12 PM, Roberto Moral wrote:
>
> In order to use puppetrun you need to run puppetd with the --listen
> option, you will also need a namespaceauth.conf client side 
> (http://reductivelabs.com/trac/puppet/wiki/NameSpaceAuth
> )
>
> from http://reductivelabs.com/trac/puppet/wiki/PuppetExecutables#id7

Thank you for the quick answer. So, I guess I have 2 options then:
 a) running puppetd via cron, e.g.: puppetd --no-daemonize --onetime
--server master.host
 b) running puppetd as a daemon with the --listen option and forcing updates
using puppetrun

What's the preferred way? Or maybe there is even a better option?

-- 
Best regards, Pavel




[Puppet Users] Re: Workstations and Certs

2009-07-01 Thread Kurt Engle
Thanks for the suggestions.

Wouldn't I achieve the same outcome with using a single cert for all
machines without the need for special scripts to delete certs from the
server and delete files from the client? Also, with respect to autosign...
would I really be able to turn it off using the SSH method below? Doesn't
the client still have to ask the server for a cert after it has been
re-imaged? With a single cert, it seems that the client would already have a
cert that I have distributed with the image and therefore, would not have to
ask for a cert and autosign could be turned off.

-kurt


On Tue, Jun 30, 2009 at 4:47 PM, Nigel Kersten  wrote:

>
> On Tue, Jun 30, 2009 at 4:32 PM, Michael Semcheski
> wrote:
> >
> > On Tue, Jun 30, 2009 at 6:36 PM, Kurt Engle wrote:
> >> Our imaging process takes an OS base image with a few apps that include
> >> Puppet and Facter and installs it on the Mac. This is over the network.
> >> When the Mac reboots it sets the hostname of the computer to the Mac's
> >> serial number and auto starts puppet. I do have my puppetmaster (CA) set
> >> to autosign certs, eliminating my intervention. This process is working
> >> well.
> >
> > What if you add an ssh key to the base OS image, and a script to be
> > run that contacts the puppet server using the ssh key, and clears any
> > cert that may exist for that client.  (It could also add the newly
> > created cert..)  You can set the ssh server to recognize that when
> > that key (from the base image) is used, the only command that may be
> > run is /usr/sbin/puppetca.
> >
> > That way, when the machine is reimaged, after its first boot it takes
> > care of the certification issue.  Then, once puppet is running on the
> > machine, you could have it remove the ssh key and the startup script.
>
> I like this idea. You could even turn off autosign then.
>
>
>
> --
> Nigel Kersten
> nig...@google.com
> System Administrator
> Google, Inc.
>
> >
>




[Puppet Users] Re: divide puppet structure

2009-07-01 Thread Arnau Bria

On Wed, 1 Jul 2009 12:02:40 +0200
Arnau Bria wrote:

> Hi all,
> something like:
> 
> /puppet/manifests/
>   site.pp:
>   include other_servicesA/site.pp
>   include other_servicesB/site.pp
>   import "module A"
> 
> /puppet/manifests/other_servicesA:
>   site.pp:
>   import "module B"
> 
> /puppet/manifests/other_servicesB:
>   site.pp:
>   import "module C"
> 
> Is this syntax correct?
> 
> And, what about nodes? same dir structure is correct?
> 
> /puppet/manifests/
>   nodes.pp:
>   include other_servicesA/node.pp
>   include other_servicesB/node.pp
> 

I can answer myself. My first test worked fine.

TIA,
Arnau




[Puppet Users] Stored configs, external node classifier, classes/modules not going away

2009-07-01 Thread Jason Antman

Hi,

I'm using stored configs (MySQL) and an external node classifier script
(just a simple web tool and MySQL backend). It seems that it's
impossible for me to remove a module from a given host - if I attempt to
do so, despite it being removed from the node classifier database
perfectly, the module is still applied to the host in question.

This is becoming quite a problem, as I accidentally applied my
puppet-client module (including distribution of
/etc/puppet/namespaceauth.conf) to my puppetmaster, and no matter what I
do in my node classifier, puppetd on the puppetmaster insists on putting
namespaceauth.conf back.

Any ideas? Is this some sort of bug in stored configs where I have to
clean out the database every time I remove something? (and, on another
note, the
kill_node_in_storeconfigs_db.rb script from the wiki doesn't seem to
work for me, it just exits with ": No such file or directory").

Thanks,
Jason Antman




[Puppet Users] firewall ports to be opened in between client and master?

2009-07-01 Thread Jason Amato


I have a firewall in between a number of client servers and the
puppetmaster.
What ports does my firewall guy need to open up in order for push
(puppetrun) and pulls to work and in which direction (master -> client
or vice versa), please?

I see these ports:
1110 opened on the client during a push
8139 runs with the puppetd daemon.
8140 runs with puppetmasterd.

Thanks in advance!

Jason Amato



[Puppet Users] Re: Forcing puppetd to ask puppetmasterd for new changes

2009-07-01 Thread Roberto Moral

In order to use puppetrun you need to run puppetd with the --listen
option; you will also need a namespaceauth.conf on the client side
(http://reductivelabs.com/trac/puppet/wiki/NameSpaceAuth).

from http://reductivelabs.com/trac/puppet/wiki/PuppetExecutables#id7

puppetd
Puppet's agent. It does not know how to find or compile manifests and  
is only useful for contacting a central server. Note that there are  
multiple clients that can be loaded within this agent, and the agent  
can listen for incoming connections. If you start it with --listen, by  
default it will accept triggers from puppetrun, but puppetd will  
refuse to start if listen is enabled and it has no namespaceauth.conf  
file. It can load other handlers; check its documentation for more  
detail.
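A minimal sketch of the pieces that paragraph describes (hostnames are
placeholders): the client allows the trigger in namespaceauth.conf and runs
puppetd with --listen, then the master kicks it with puppetrun:

# /etc/puppet/namespaceauth.conf on the client
[puppetrunner]
    allow puppetmaster.example.com

# on the puppetmaster
puppetrun --host client.example.com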




On Wednesday, Jul 1, 2009, at 8:36 AM, Pavel Shevaev wrote:

>
> Guys, I seriously could not find this topic in the documentation.
>
> What do you do when you need to force puppetd hosts to get the new
> settings from puppetmasterd?
>
> What I found was only sending USR1 signal to the client process in
> order to make it refresh its configuration from the master.
> But I find it a bit inconvenient for a large amount of hosts.
>
> There is also puppetrun which, if I understand correctly, can make
> specific puppet hosts check for new changes.
> However it seems it tries to connect to port 8139 and for some reason
> my puppetd hosts are not listening on this port, while puppetd is
> running for sure.
> I believe it can be configured somehow, I just thought it was the
> default option...
>
> And finally, how often does puppetd running in daemon mode ask
> puppetmasterd for new changes? Does it do it more than once?
>
> puppetd --help says(I'm using 0.24.4 on Gentoo):
>
> ==
> Synopsis
> 
> 
> Currently must be run out periodically, using cron or something  
> similar.
>
> ===
>
> Is it true?
>
> Thanks in advance.
>
> -- 
> Best regards, Pavel
>
> >





[Puppet Users] Forcing puppetd to ask puppetmasterd for new changes

2009-07-01 Thread Pavel Shevaev

Guys, I seriously could not find this topic in the documentation.

What do you do when you need to force puppetd hosts to get the new
settings from puppetmasterd?

What I found was only sending a USR1 signal to the client process in
order to make it refresh its configuration from the master.
But I find that a bit inconvenient for a large number of hosts.

There is also puppetrun which, if I understand correctly, can make
specific puppet hosts check for new changes.
However it seems it tries to connect to port 8139 and for some reason
my puppetd hosts are not listening on this port, while puppetd is
running for sure.
I believe it can be configured somehow, I just thought it was the
default option...

And finally, how often does puppetd running in daemon mode ask
puppetmasterd for new changes? Does it do it more than once?

puppetd --help says(I'm using 0.24.4 on Gentoo):

==
Synopsis


Currently must be run out periodically, using cron or something similar.

===

Is it true?

Thanks in advance.

-- 
Best regards, Pavel




[Puppet Users] Re: HTTP as a source for files

2009-07-01 Thread David Schmitt

Robin Sheat wrote:
> On Wednesday 01 July 2009 14:14:36 Greg wrote:
>> The main question would be in terms of how to detect file changes
>> without a full transfer - HTTP does provide some mechanisms for
>> checking this, but I'm not sure if they would be adequate if scripting
>> responses through HTTP...
> 
> I use S3 as a file source for my larger files, it allows contents to be 
> verified by MD5. My code for this is available here: 
> https://code.launchpad.net/~eythian/+junk/ec2facts
> it's pretty basic, but gets the job done.
> 
> I mention this because a similar approach should be usable when backing with 
> HTTP and Apache. You could either do a HEAD request with 'If-Modified-Since', 
> and ensure that when you save the file, you update the file timestamp to that 
> supplied by apache, or check to see if apache will provide the MD5 (or 
> whatever) hash of the file contents. If the HEAD request indicates that there 
> is an updated version, then you pull it down using wget or similar. 

The two classical approaches to this are either properly configured ETag 
support or using the checksum as part of the filename and never refetch 
a file unless its filename has changed.


Regards, DavidS




[Puppet Users] Re: HTTP as a source for files

2009-07-01 Thread Robin Sheat
On Wednesday 01 July 2009 14:14:36 Greg wrote:
> The main question would be in terms of how to detect file changes
> without a full transfer - HTTP does provide some mechanisms for
> checking this, but I'm not sure if they would be adequate if scripting
> responses through HTTP...

I use S3 as a file source for my larger files; it allows contents to be
verified by MD5. My code for this is available here:
https://code.launchpad.net/~eythian/+junk/ec2facts
It's pretty basic, but it gets the job done.

I mention this because a similar approach should be usable when backing
Puppet with HTTP and Apache. You could either do a HEAD request with
'If-Modified-Since' and ensure that, when you save the file, you update its
timestamp to the one supplied by Apache, or check whether Apache will provide
the MD5 (or similar) hash of the file contents. If the HEAD request indicates
that there is an updated version, you then pull it down with wget or similar.
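
As a rough sketch of the timestamp-based variant (hostname and paths
invented), something like this would probably do:

# wget -N only re-downloads when the remote copy is newer, and sets the
# local mtime from the Last-Modified header, so repeated runs stay cheap
exec { "sync-bigfile":
    command => "/usr/bin/wget -N -q -P /var/cache/files http://files.example.com/bigfile.tar.gz",
}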

-- 
Robin  JabberID: 
http://www.kallisti.net.nz/blog   |||   http://identi.ca/eythian

PGP Key 0xA99CEB6D = 5957 6D23 8B16 EFAB FEF8  7175 14D3 6485 A99C EB6D



signature.asc
Description: This is a digitally signed message part.


[Puppet Users] divide puppet structure

2009-07-01 Thread Arnau Bria

Hi all,

I'm going to open my puppet server to other services in my office, but I'd
like each service to be able to modify only its own configuration. I know
there are a couple of files that I cannot split, like fileserver.conf,
autosign.conf or puppet.conf, but that's a minor problem.

I'm mainly worried about site.pp and the node definitions.

I have my module imports in site.pp, but can I include other site.pp files
from the main site.pp?

something like:

/puppet/manifests/
site.pp:
include other_servicesA/site.pp
include other_servicesB/site.pp
import "module A"

/puppet/manifests/other_servicesA:
site.pp:
import "module B"

/puppet/manifests/other_servicesB:
site.pp:
import "module C"

Is this syntax correct?

And what about nodes? Is the same directory structure correct?

/puppet/manifests/
nodes.pp:
include other_servicesA/node.pp
include other_servicesB/node.pp

Is there any other way of doing this?
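
Or would something along these lines be the way to go (assuming import takes
file paths or globs relative to the manifests directory)?

# /puppet/manifests/site.pp
import "other_servicesA/*.pp"
import "other_servicesB/*.pp"
import "nodes.pp"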


TIA,
Arnau




[Puppet Users] Re: Variable named "memory" in templates

2009-07-01 Thread Peter Meier

> I'll file this as a bug then, probably later today.

yeah, good idea.

cheers pete





[Puppet Users] Re: Variable named "memory" in templates

2009-07-01 Thread Thomas Bellman

David Schmitt wrote:

> Peter Meier wrote:

>> it looks like memory is somehow a special variable (dunno why), but if
>> you don't name your variable memory, it works.

> $memory is a fact containing amount of memory on the client.

No, it isn't.  At least not a fact that facter (version 1.5.5) reports.
And $memory isn't available in the manifests unless I set it explicitly.

And even if it were, variable assignments in the manifests override
facts.  Compare the lines output from:

    node default
    {
        $memorysize = 4711
        notice(inline_template("memorysize = <%= memorysize %>"))
        notice(inline_template("memory (template) = <%= memory %>"))
        notice("\$memory = <$memory>")
        notice("\$memoryfree = <$memoryfree>")
    }

$memorysize *is* a fact from facter, and the above code tells me
"memorysize = 4711" (while facter says it is "1.93 GB").  On the
other hand, it says "$memory = <>", but "memory (template) = 105452"
(or some similar, but varying, figure).


I can of course work around it by doing '$xmemory = $memory' inside
my definition (I don't want a strange name as a parameter to the
definition) and accessing "xmemory" from the template, but it's
not particularly pretty...
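
In other words, something along these lines (names made up):

define mydef($memory) {
    # copy the parameter into a differently named variable that the
    # template can read without colliding with whatever "memory" is
    $xmemory = $memory
    file { "/tmp/${name}.conf":
        content => template("mymodule/mydef.erb"),  # the template uses <%= xmemory %>
    }
}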

I'll file this as a bug then, probably later today.


/Bellman




[Puppet Users] Re: Variable named "memory" in templates

2009-07-01 Thread David Schmitt

Peter Meier wrote:
> Hi
> 
>>>> Why can't I access $memory like other variables?  From where does the
>>>> value I do get come from?
>>> it looks like memory is somehow a special variable (dunno why), but if
>>> you don't name your variable memory, it works.
>> $memory is a fact containing amount of memory on the client.
> 
> hmm I thought that, but this one isn't displayed with facter:
> 
> # facter | grep memory
> memoryfree => 933.89 MB
> memorysize => 3.36 GB
> 
> and if I do:
> 
> # cat foo.pp
> notice("$memory")
> notice(inline_template("<%= memory %>"))
> 
> # puppet foo.pp
> notice: Scope(Class[main]):
> notice: Scope(Class[main]): 36312
> 
> I don't understand the difference.

d'oh. Perhaps this comes from ruby then?


Regards, DavidS




[Puppet Users] Re: HTTP as a source for files

2009-07-01 Thread Julian Simpson

I like the idea of HTTP if it gets me closer to stubbing out the
puppetmaster when I'm developing manifests.  I'm thinking I could stand up
a WEBrick server to resolve all the file sources.  Of course, I'd use
Apache or Nginx in production.
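
Something like this (port and directory made up) would probably be enough
for local testing:

require 'webrick'

# serve a local files directory over plain HTTP while hacking on manifests
server = WEBrick::HTTPServer.new(:Port => 8000,
                                 :DocumentRoot => '/home/me/puppet/files')
trap('INT') { server.shutdown }
server.start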

J.

2009/7/1 Marc Fournier :
>
>
> Hello,
>
>> I've been looking into having Puppet deploy some larger files and I'm
>> noticing that it ties up puppetmasters quite a bit and can often
>> result in a timeout if the file is too large. Before I submit a
>> feature request for a http method for file sources, I would throw it
>> out to the group and see if anyone had any thoughts on it.
>>
>> [...]
>
> I'm convinced we could benefit from having other file sources than
> file:// and puppet://. There already is a (similar) ticket for this:
> http://projects.reductivelabs.com/issues/184
>
> You might also be interested by Luke Kanies's reply to more or less the
> same question on puppet-dev a few weeks ago:
> http://groups.google.com/group/puppet-dev/browse_thread/thread/275658354cd45bab/60b7672fbc35c371
>
> I've started working on this (but unfortunately got preempted and now
> stalled). It shouldn't be too difficult to implement, but as far as I'm
> concerned, my knowledge of ruby is currently too low to do this
> efficiently :-(
>
> Marc



-- 
Julian Simpson
Software Build and Deployment
http://www.build-doctor.com




[Puppet Users] Re: Variable named "memory" in templates

2009-07-01 Thread Peter Meier

Hi

>>> Why can't I access $memory like other variables?  From where does the
>>> value I do get come from?
>>
>> it looks like memory is somehow a special variable (dunno why), but if
>> you don't name your variable memory, it works.
>
> $memory is a fact containing amount of memory on the client.

hmm I thought that, but this one isn't displayed with facter:

# facter | grep memory
memoryfree => 933.89 MB
memorysize => 3.36 GB

and if I do:

# cat foo.pp
notice("$memory")
notice(inline_template("<%= memory %>"))

# puppet foo.pp
notice: Scope(Class[main]):
notice: Scope(Class[main]): 36312

I don't understand the difference.

cheers pete




[Puppet Users] Re: Variable named "memory" in templates

2009-07-01 Thread David Schmitt

Peter Meier wrote:
> Hi
> 
>> Why can't I access $memory like other variables?  From where does the
>> value I do get come from?
> 
> it looks like memory is somehow a special variable (dunno why), but if
> you don't name your variable memory, it works.

$memory is a fact containing amount of memory on the client.


Regards, DavidS




[Puppet Users] Re: yum provider executes rpm -e?

2009-07-01 Thread David Schmitt

Peter Meier wrote:
> Hi
> 
>>> yeah I also thought that. On the other side installing things (which
>>> will install a bunch of dependecies) is also an unexpected result
>>> somehow, as the dependencies aren't managed by puppet. For sure this
>>> result isn't that worse as uninstall, but I don't think that this is
>>> really an argument, however I agree that in this case we simply also not
>>> care. But why do we care on uninstall?
>> The basic issue is that puppet doesn't know about dependencies (not sure
>> it should), but once you throw 'yum -y erase' into the mix, it becomes
>> very easy to write inconsistent manifests, where a package erase removes
>> a package that is explicitly mentioned by the manifest for install -
>> sure the next puppet run will then install that package again, but in
>> the meantime, you have a very broken system as the 'yum erase file'
>> example shows.
> 
> 
> yeah, which might be definitely worse than installing a package we'd
> like to have uninstalled. On the other side it's an inconsistency we
> can't solve using yum without declaring all yum dependencies within
> puppet, which would lead us to simply use rpm... ;)

FWIW: the apt provider removes packages plus their dependencies. And yes,
this has already led to problems. It is even worse if one doesn't control
all "important" packages with puppet, since those might be removed without
anybody noticing...


Regards, DavidS




[Puppet Users] Re: HTTP as a source for files

2009-07-01 Thread Marc Fournier


Hello,

> I've been looking into having Puppet deploy some larger files and I'm
> noticing that it ties up puppetmasters quite a bit and can often
> result in a timeout if the file is too large. Before I submit a
> feature request for a http method for file sources, I would throw it
> out to the group and see if anyone had any thoughts on it.
> 
> [...]

I'm convinced we could benefit from having other file sources than
file:// and puppet://. There already is a (similar) ticket for this:
http://projects.reductivelabs.com/issues/184

You might also be interested in Luke Kanies's reply to more or less the
same question on puppet-dev a few weeks ago:
http://groups.google.com/group/puppet-dev/browse_thread/thread/275658354cd45bab/60b7672fbc35c371

I've started working on this (but unfortunately got preempted, and it is now
stalled). It shouldn't be too difficult to implement, but as far as I'm
concerned, my knowledge of Ruby is currently too low to do this
efficiently :-(

Marc






[Puppet Users] Re: Unable to get storedconfigs to work

2009-07-01 Thread Felix Schäfer


Am 01.07.2009 um 03:24 schrieb Greg:

> I've gotten it working with 2.3.2... But I did have to put in the
> require lines
> as was mentioned in a previous message...


I must say that I'm not very happy with this solution as it seems more
hackish than anything, but it does work when adding the few require lines.
Anyhow, this should be fixed to work with the current stable Rails; shall I
reopen #2041 or file a new bug?

Felix




[Puppet Users] Re: HTTP as a source for files

2009-07-01 Thread Peter Meier

Hi

> I've been looking into having Puppet deploy some larger files and I'm
> noticing that it ties up puppetmasters quite a bit and can often
> result in a timeout if the file is too large. Before I submit a
> feature request for a http method for file sources, I would throw it
> out to the group and see if anyone had any thoughts on it.

Yes, this is the main reason why >= 0.25.0 will use REST instead of XMLRPC;
XMLRPC requires escaping all file data into an XML format.

The current rule of thumb is not to deploy larger files with puppet <
0.25.0. :(

> [...]
>
> So what does everyone think? Is a HTTP source for files feasable?

As far as I understood, with REST this would all be possible, and Luke
explicitly mentioned that it would even be possible to serve files natively
with Apache (for example), so no Ruby stack overhead at all.
However, I haven't yet seen any example that does that, nor how to set it up.
But it's definitely already the idea, even if you can't find a ticket
for it yet.

Maybe try out 0.25.0 beta 2 to see if it works better; it definitely
should, and according to reports it does!

cheers pete




[Puppet Users] Re: puppetmaster behind NAT

2009-07-01 Thread Peter Meier

Hi

>> the puppet masters cert and CA needs to contain the public FQDN as well.
>> use certnames (see ConfigurationReference [1]) to include both domains,
>> local and public. This will mean that you need to regenerate the certs,
>> as well to resign all clients.
> 
> 
> Thanks again, it worked just fine. BTW, there is a typo, the required
> configuration option is called 'certdnsnames'. Here is what I did:

great! :) Yeah, actually I didn't look in the reference and just guessed
the name from my memory. Sometimes a few bytes get lost in my brain... ;)
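
In case anyone else hits this, the setting ends up looking something like
this (hostnames invented), set before regenerating the master's cert:

# puppet.conf on the puppetmaster
[puppetmasterd]
certdnsnames = puppet:puppet.internal.example.com:puppet.example.com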


cheers pete
