Re: [Puppet Users] puppetlabs-firewall scope

2012-12-08 Thread Louis Coilliot
Thanks a lot. Indeed, that way it leaves my untargeted nodes alone.
And I feel it's cleaner than putting things in site.pp.

However, I still have one small problem: on the first Puppet application
of firewall rules on a node, the purge of the preexisting rules is slow,
temporarily blocking the network.

Fortunately, it comes back after a while.

I don't have this annoyance if I run 'iptables -F' first.

See an example below.

I can live with that, but if you have a workaround, it's welcome.
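One hedged idea along the lines of that 'iptables -F' observation (untested, and the resource name and guard are purely illustrative, not part of the module): flush the kernel rule set in one shot before the purge runs, so the slow rule-by-rule removal never happens against a long chain.

```puppet
# Hypothetical sketch only: flush everything once before the purge.
# The 'onlyif' guard is illustrative; in practice you would need a
# reliable test for "unmanaged rules still present".
exec { 'flush-firewall-once':
  command => '/sbin/iptables -F',
  onlyif  => '/sbin/iptables -S INPUT | /bin/grep -q -- "-A INPUT"',
  before  => Resources['firewall'],
}
```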

Louis Coilliot

Info: Applying configuration version '1354997226'
/Firewall[ fe701ab7ca74bd49f13b9f0ab39f3254]/ensure: removed
/Firewall[ a627067f779aaa7406fa9062efa4550e]/ensure: removed
/Firewall[ 49bcd611c61bdd18b235cea46ef04fae]/ensure: removed
Error: /File[nagios.vim]: Could not evaluate: Connection timed out -
connect(2) Could not retrieve file metadata for
puppet:///modules/nagios/nagios.vim: Connection timed out - connect(2)
Error: /File[nagiosvim-install.sh]: Could not evaluate: Connection timed
out - connect(2) Could not retrieve file metadata for
puppet:///modules/nagios/nagiosvim-install.sh: Connection timed out -
connect(2)
Error: /File[/etc/vimrc]: Could not evaluate: Connection timed out -
connect(2) Could not retrieve file metadata for
puppet:///modules/vim/vimrc: Connection timed out - connect(2)
/Firewall[ b205c9394b2980936dac53f8b62e38e7]/ensure: removed
/Firewall[000 accept all icmp]/ensure: created
Info: /Firewall[000 accept all icmp]: Scheduling refresh of
Exec[persist-firewall]
/Firewall[ d53829245128968bfa101d5214694702]/ensure: removed
/Firewall[001 accept all to lo interface]/ensure: created
Info: /Firewall[001 accept all to lo interface]: Scheduling refresh of
Exec[persist-firewall]
/Firewall[002 accept related established rules]/ensure: created
Info: /Firewall[002 accept related established rules]: Scheduling
refresh of Exec[persist-firewall]
/Firewall[003 accept SSH]/ensure: created
Info: /Firewall[003 accept SSH]: Scheduling refresh of
Exec[persist-firewall]
/Firewall[999 drop all on INPUT eventually]/ensure: created
Info: /Firewall[999 drop all on INPUT eventually]: Scheduling refresh of
Exec[persist-firewall]
/Firewall[999 drop all on FORWARD eventually]/ensure: created
Info: /Firewall[999 drop all on FORWARD eventually]: Scheduling refresh
of Exec[persist-firewall]
/Stage[main]/Firewall/Exec[persist-firewall]: Triggered 'refresh' from 6
events
Finished catalog run in 196.45 seconds


On 07/12/2012 20:34, Shawn Foley wrote:
 I created a firewall module. In firewall/manifests/init.pp I have the
 following.

 class firewall {

   ## Always persist firewall rules
   exec { 'persist-firewall':
     command     => '/sbin/iptables-save > /etc/sysconfig/iptables',
     refreshonly => true,
   }

   ## These defaults ensure that the persistence command is executed after
   ## every change to the firewall, and that pre and post classes are run
   ## in the right order to avoid potentially locking you out of your box
   ## during the first puppet run.
   Firewall {
     notify  => Exec['persist-firewall'],
     before  => Class['firewall::post'],
     require => Class['firewall::pre'],
   }
   Firewallchain {
     notify => Exec['persist-firewall'],
   }

   ## Purge unmanaged firewall resources
   ##
   ## This will clear any existing rules, and make sure that only rules
   ## defined in puppet exist on the machine
   resources { 'firewall': purge => true }

   ## include the pre and post modules
   include firewall::pre
   include firewall::post
 }

 Then you just include the firewall class.
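The firewall::pre and firewall::post classes referenced above are not shown in the message; a minimal sketch, modelled on the purge pattern from the puppetlabs-firewall README and on the rule names visible in the log output earlier in the thread (exact parameters may differ between module versions):

```puppet
class firewall::pre {
  # These rules apply before any other managed rule; unset the class
  # defaults so this class does not require itself.
  Firewall {
    require => undef,
  }
  # Keep basic connectivity and SSH open so a purge can never lock
  # you out mid-run.
  firewall { '000 accept all icmp':
    proto  => 'icmp',
    action => 'accept',
  }
  firewall { '001 accept all to lo interface':
    proto   => 'all',
    iniface => 'lo',
    action  => 'accept',
  }
  firewall { '002 accept related established rules':
    proto  => 'all',
    state  => ['RELATED', 'ESTABLISHED'],
    action => 'accept',
  }
  firewall { '003 accept SSH':
    proto  => 'tcp',
    dport  => '22',
    action => 'accept',
  }
}

class firewall::post {
  # Applied after every other managed rule: default deny.
  firewall { '999 drop all on INPUT eventually':
    proto  => 'all',
    action => 'drop',
    before => undef,
  }
}
```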


 Shawn Foley
 425.281.0182


 On Tue, Dec 4, 2012 at 12:36 PM, Louis Coilliot
 louis.coill...@think.fr mailto:louis.coill...@think.fr wrote:

 Hello,

 I can't figure out how I can use the module puppetlabs-firewall only
 for some targeted nodes.

 If I put :

 resources { 'firewall': purge => true }

 in top scope (i.e. site.pp),

 then all the firewall rules on all my nodes are purged. Even for nodes
 for which I don't apply any module containing specific firewall { ...
 } resources.

 If I put it in a module (e.g. myfw), then on all nodes where I
 apply a module containing firewall resources, I get a mix of the
 previous rules (defined locally with the OS) and the new ones provided
 by Puppet.

 Did I miss something, or is this the expected behaviour?

 If this is expected, is there a workaround to apply the purge of the
 rules only for some nodes where I want to apply specific firewall
 rules through modules and puppet-firewall ?

 Thanks in advance.

 Louis Coilliot

 --
 You received this message because you are subscribed to the Google
 Groups Puppet Users group.
 To post to this group, send email to puppet-users@googlegroups.com
 mailto:puppet-users@googlegroups.com.
 To unsubscribe from this group, send email to
 puppet-users+unsubscr...@googlegroups.com
 

Re: [Puppet Users] How to group hosts?

2012-12-08 Thread Jakov Sosic

On 12/06/2012 06:44 PM, Stefan Goethals wrote:

Hi,

You could use facter-dot-d to set a fact on those servers specifying the
name of a storagegroup or a webcluster, and then use that fact in your
Hiera hierarchy.

so you could have
/etc/facter/facts.d/servertype.yaml with content
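The example file contents were lost in the archive; presumably something along these lines (fact name and value are hypothetical):

```yaml
# /etc/facter/facts.d/servertype.yaml -- hypothetical fact name/value
servertype: webcluster01
```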



Thank you, I didn't know about that.

I will write a class that serves /etc/puppet/private/%H/facts.yaml into 
that dir, or, if that file does not exist, serves an empty YAML file from 
modules/foo/files/facts.yaml.
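A sketch of such a class, with hypothetical module and mount names: Puppet's file `source` attribute accepts an array and serves the first source that exists, which gives exactly this per-host-with-fallback behaviour (the `private` fileserver mount would need to be defined in fileserver.conf):

```puppet
class foo::facts {
  # Serve a per-host facts file from the master if one exists,
  # otherwise fall back to an empty YAML file shipped in the module.
  # Mount name and paths are illustrative.
  file { '/etc/facter/facts.d/facts.yaml':
    ensure => file,
    source => [
      "puppet:///private/${::hostname}/facts.yaml",
      'puppet:///modules/foo/facts.yaml',
    ],
  }
}
```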


I think I will have to move to an ENC very soon, because this solution 
somehow doesn't feel good :-/




--
Jakov Sosic
www.srce.unizg.hr




[Puppet Users] Configuration for modules

2012-12-08 Thread Nathan Ward
I'm writing a module (with new types and new providers) to configure Splunk 
(either main servers or forwarders) that uses the Splunk REST API.

This API needs some authentication information, and I'd like that to be 
configurable per host (i.e. per host that puppet agent runs on).

Is there a standard way to do this? Should I drop a new file into 
/etc/puppet/splunk.conf (or similar) with the config parameters I want to 
use, or, even better, is there a nice, well-defined way to do this through 
a manifest?

I've thought about setting a global parameter. I have two types, 
'splunk-monitor' and 'splunk-user', and I'm planning to make their 
providers both inherit from a common class 'Puppet::Provider::Splunk' 
with some common utilities like REST HTTP helpers - I copied this 
approach from the package providers.
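For the per-host config-file idea, one common pattern is for the shared parent provider to lazily read a small "key = value" file on the agent. A minimal sketch of such a helper; the file path, format, and all names here are assumptions for illustration, not the Splunk module's actual API:

```ruby
# Hypothetical helper for a shared parent provider: read per-host
# credentials from a simple "key = value" file on the agent node.
module SplunkConf
  DEFAULT_PATH = '/etc/puppet/splunk.conf' # illustrative location

  # Parse "key = value" lines; '#' comments, blank lines, and lines
  # without an '=' are skipped.
  def self.parse(text)
    text.each_line.with_object({}) do |line, conf|
      line = line.strip
      next if line.empty? || line.start_with?('#')
      key, sep, value = line.partition('=')
      conf[key.strip] = value.strip unless sep.empty?
    end
  end

  # Read and memoize the config so every provider instance shares it.
  def self.load(path = DEFAULT_PATH)
    @conf ||= parse(File.read(path))
  end
end
```

Each provider inheriting from the common parent could then call `SplunkConf.load['user']` (or similar) when building its REST requests, so the credentials never have to appear in the manifest at all.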

If I were to set global parameters, would I need to define it like so:

Splunk-monitor {
  parameter => blah,
}
Splunk-user {
  the_same_parameter_again => blah,
}

.. or is there a way to define global configuration for the parent that 
applies to both:

Splunk {
  common_parameter => blah,
}

I'm not sure how this would work when it comes to enumerating all the 
existing instances of configuration that this module adds - there can be 
many Splunk monitor instances configured, so I'd want to enumerate them 
first, much like how the package providers enumerate installed packages 
first.

I've tried to think of modules that need authentication information and 
analyse them for best practices, but haven't come across any.

Any pointers in the right direction would be greatly appreciated.
