Re: [Puppet Users] Service Resources and Selinux

2012-10-10 Thread Sean Millichamp
Tom,

It seems like having that as a parameter in the service type might be a
good idea worthy of at least some further discussion. Want to open a
feature request in Redmine to track it? I might (eventually) take a stab
at adding support for it.

Sean

On Wed, 2012-10-10 at 09:01 +0100, Tom wrote:
 Well, I've decided on a very simple way of doing this,
 
# Keep it running
service { 'mysqld':
  ensure     => running,
  start      => 'runcon -u system_u /etc/init.d/mysqld start',
  hasrestart => false,
  require    => [ Package['mysql-server'], File[$mysqldirs], ],
}
 
 So, it starts under the correct selinux user context, and restart via
 the init script is disabled so that a restart makes use of the start
 command.
 
 Not sure if this would make a good resource flag?
 
 Many thanks.  Tom.
 
 
 
 On 10/10/12 07:55, Tom wrote:
  Hi,
 
  Thanks for the response.  Really, the way I'm approaching this is to
  start mysqld under the right selinux user context so that it doesn't
  label its own files incorrectly.  Every time a database or table is
  created, MySQL will be creating it under the wrong user context, and
  selinux will then go and reset it back.
 
  I think maybe a wrapper script using runcon which invokes the mysqld 
  service under the correct context is going to be the way to go.  
  Really though, I'd hoped that puppet had some kind of provision for 
  starting services with the correct user context!
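
  Something along these lines, maybe - a rough, untested sketch (the
  wrapper path here is made up):

  # Deploy a tiny wrapper that starts mysqld under the system_u user
  # context, then point the service's start command at the wrapper.
  file { '/usr/local/sbin/mysqld-runcon':
    ensure  => file,
    mode    => '0755',
    content => "#!/bin/sh\nexec runcon -u system_u /etc/init.d/mysqld \"\$@\"\n",
  }

  service { 'mysqld':
    ensure     => running,
    start      => '/usr/local/sbin/mysqld-runcon start',
    hasrestart => false,
    require    => File['/usr/local/sbin/mysqld-runcon'],
  }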
 
  Just wondering if anyone else has had the same issue in the past, or 
  do they just ignore all those seluser notifications? :-)
 
  Many thanks.  Tom.
 
 
 
  On 10/10/12 01:50, Peter Brown wrote:
  You need to add a require to the service for the config files you are
  managing.
  I find the best way to do that is to put all the config files in a
  config subclass and then require that in the service.
 
 
  On 10 October 2012 01:02, Tom t...@t0mb.net wrote:
  Hi list,
 
  I've got an issue at the moment, which isn't really a big problem so
  much as an untidy annoyance, and I'd just like to understand what the
  best practice might be when dealing with it.
 
  As a really quick summary: Puppet is starting up the mysqld service
  for the first time as unconfined_u, and then when MySQL goes and
  creates a load of its initial files, also as unconfined_u, Puppet goes
  and resets them all to system_u - which is what they should be
  according to matchpathcon.
 
  The thing is, because the service is started as unconfined_u, any
  databases/tables that are created are going to inherit that, and 
  puppet is
  going to be resetting them.
 
  For some more detail, I've written something which sets the
  mysqld_db_t selinux file_context on my data directories in /home, and
  I have a notify which will go and check and re-set the selinux
  file_context if there are any changes in these directories.  They're
  set to recurse, so to stop Puppet changing things from unconfined_u to
  system_u on a regular basis, and sending refresh notices to my Exec
  resources, I've set selinux_ignore_defaults to true in my File
  resources.
 
  This strikes me as a bit of a dirty way of doing things, and I was 
  wondering
  if anyone had any better ideas of how to manage this.
 
  Please find below a sample of the relevant code - because I'm sure my
  verbose description is probably leaving some people scratching their
  heads! :)  I was going to make the file_context stuff much more
  re-usable, but I want to get my head around the best practices first -
  as I'm not that experienced with all of this stuff, to be honest!
 
  Many thanks.  Tom.
 
 
# List of directories we're going to use with MySQL
$mysqldirs = [ '/home/data', '/home/logs', '/home/mysqltmp', ]

# Set SELinux contexts
define add_selinux_context ($context = 'mysqld_db_t') {
  file { $name:
    ensure                  => directory,
    owner                   => 'mysql',
    group                   => 'mysql',
    seltype                 => $context,
    selinux_ignore_defaults => true,
    recurse                 => true,
    require                 => Package['mysql-server'],
    notify                  => [ Exec["add_file_context_${context}_${name}"],
                                 Exec["set_file_context_${context}_${name}"], ],
  }

  # Set the default file_context regex for the path
  exec { "add_file_context_${context}_${name}":
    command     => "semanage fcontext -a -t ${context} \"${name}(/.*)?\"",
    unless      => "semanage fcontext -l | grep '^${name}(/.*)?.*:${context}:'",
    require     => [ Package['policycoreutils-python'], File[$name], ],
    refreshonly => true,
  }

  # Reset the file_context using restorecon
  exec { "set_file_context_${context}_${name}":
    command => "restorecon -R ${name}",
    unless  => "ls -d --scontext ${name} | awk -F: '{print \$3}' | grep \"${context}\"",
    require => File[$name],
  }
}
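
The define would then be invoked along these lines (the usage isn't shown
in the sample above, but follows from the $mysqldirs variable):

# Apply the default mysqld_db_t context to each MySQL directory
add_selinux_context { $mysqldirs: }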

Re: [Puppet Users] Re: Announce: PuppetDB 0.9.0 (first release) is available

2012-05-23 Thread Sean Millichamp
On Wed, 2012-05-23 at 06:24 -0700, jcbollinger wrote:

 That understanding of storeconfigs looks right, but I think the
 criticism is misplaced.  It is not Deepak's line of thinking that is
 dangerous, but rather the posited strategy of purging (un)collected
 resources.  Indeed, I rate resource purging as a bit dangerous *any*
 way you do it.  Moreover, the consequences of a storeconfig DB blowing
 up are roughly the same regardless of the DBMS managing it or the
 middleware between it and the Puppetmaster.  I don't see how the
 existence of that scenario makes PuppetDB any better or worse.

Indeed, it *is* dangerous, but so are many things we do as system
administrators. The key is in gauging the risk and then choosing the
right path accordingly.  In my environment I am not always able to know
the complete history of resources as changes may come from unexpected
places. It is less than ideal, but it is one aspect of my reality. In
that situation, the selective use of purging becomes key to making sure
that things which need to be cleaned up actually get cleaned up.

Thankfully, I don't put anything in exported resources subject to
purging that would be capable of bringing down a production application,
but there is quite a bit that could cause a variety of headaches,
alerts, and tickets on a massive scale for a while during the
reconvergence.

In addition, we are transitioning to PE, and the Compliance tool will
give me another way of handling this in a more manual, admin-review
approach (to catch resources that get added outside of Puppet's
knowledge).

What I really need is some tool by which I can mark exported resources
as absent instead of purging them from the database when they are no
longer needed (such as deleting a host).  That would eliminate most, if
not all, of the intersections of purging and exported resources that I
have.  Right now I use a Ruby script I found quite a while back to
delete removed nodes and all of their data.  I'm sure there is a way to
mark the resources as ensure => absent instead, but I've not gone
digging into the DB structure.
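
Conceptually, plain exported resources might express that idea - a
rough, untested sketch, with a made-up resource name:

# Instead of deleting a retired node's rows from the database, some
# surviving node (or the master) could export a "tombstone" record
# that collectors realize with ensure => absent, so the stale config
# is cleaned up rather than silently vanishing.
@@nagios_host { 'retired-host.example.com':
  ensure => absent,
}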

 If you cannot afford to wait out a repopulation of some resource, then
 you probably should not risk purging its resource type.  If you do not
 purge, then a storeconfig implosion just leaves your resources
 unmanaged.  If you choose to purge anyway then you need to understand
 that you thereby assume some risk in exchange for convenience;
 mitigating that risk probably requires additional effort elsewhere
 (e.g. DB replication and failover, backup data center, ...).

Indeed, as I said above, it is about risk management. Deepak's statement
I had responded to wasn't the first time I had read the "oh, just wait
for it to repopulate" suggestion, and I wanted to be certain that wasn't
actually an assumption in the design with regard to the stability of the
storeconfigs data across updates and the like.

At some point you have to trust tools that have earned that trust
(either via testing or real world use or both) to do the job that they
say they are going to do. Puppet has years of earning that trust with
me. Could something corrupt and destroy the database and cause me a lot
of trouble? Sure, but that could be said of many tools. That's why we
have backups, DR systems, etc., even though the "in the now" when it
fails can be painful as heck. However, as long as Puppet Labs is
designing it to be dependable and upgrade-safe (which it sounds like
they are) then I'll continue to trust it (with prudent testing, of
course) because they've earned it.

Sean





Re: [Puppet Users] Re: Announce: PuppetDB 0.9.0 (first release) is available

2012-05-22 Thread Sean Millichamp
On Mon, 2012-05-21 at 15:39 -0600, Deepak Giridharagopal wrote:


 1) The data stored in PuppetDB is entirely driven by puppetmasters
 compiling catalogs for agents. If your entire database exploded and
 lost all data, everything will be 100% repopulated within around
 $runinterval minutes.

I think that this is a somewhat dangerous line of thinking.  Please
correct me if my understanding of storedconfigs is wrong, but if I am
managing a resource with resources { 'type': purge => true } (or a
purged directory populated by file resources) and any subset of those
resources are exported resources, then, if my entire database exploded,
would I not have Puppet purging resources that haven't repopulated
during this repopulation window?  They would obviously be replaced, but
if those were critical resources (think exported Nagios configs,
/etc/hosts entries, or the like) then this could be a really big problem.
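
To make the scenario concrete, this is roughly the pattern I have in
mind (resource names are illustrative):

# Collect all exported Nagios host entries, and purge any local
# nagios_host resources Puppet doesn't know about.  If the storeconfigs
# database were emptied, the collector would match nothing on the next
# run, and the purge would sweep away the live configs until the
# exporting nodes had all checked in again.
Nagios_host <<| |>>

resources { 'nagios_host':
  purge => true,
}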

To me storedconfigs are one of the killer features in Puppet. We are
using them for a handful of critical things and I plan to only expand
their use. I'm glad that Puppet Labs is focusing some attention on them,
but this attitude of "we can wait out a repopulation" has me worried.
Again, maybe I'm misunderstanding how purging with exported resources
actually works, but my experience has been that if you clear an
exported resource from the database, so goes the exported record in a
purge situation.

In a slightly different vein, does PuppetDB support a cluster or HA
configuration? I assume at least active/passive must be okay. Any
gotchas to watch for?

Thanks,
Sean




Re: [Puppet Users] Re: Puppet Sites. Your thoughts?

2012-05-18 Thread Sean Millichamp
On Fri, 2012-05-11 at 09:39 -0700, Daniel Sauble wrote:
 Another problem is that if you move services around, you have to
 update puppet.conf on all nodes that use that service. For example, if
 you migrate your master to a new host, you have to update puppet.conf
 on every agent that uses that master. What Puppet Sites provides is a
 service registry that allows you to store this information in a
 central location. Your agents retrieve service connection information
 from the service registry. So, if your master switches to a different
 host, all you need do is update the host in the service registry, and
 all your agents will pick up that change automatically.

Daniel,

Sorry for chiming in late, but I'm just catching up on this discussion.
I didn't see explicit mention of it one way or the other, but I would
hope that whatever mechanism you use for the service registry will
support some type of inheritance for assigning configuration settings at
fairly arbitrary levels/groupings, and not just globally with per-host
overrides.

At $WORK we are a multi-tenant environment, and differing customer needs
mean there is the potential for significant Puppet configuration
variance from environment to environment.  For instance, one customer
may have their own Puppetmaster environment for catalogs/files but share
the common CA, while most other customers use a shared set of
Puppetmasters. We have created a $customer variable within Puppet
(available on every host) that we use with Hiera to select out any
per-customer settings. We aren't doing so currently, but we may even
select Puppetmasters based on datacenter (so $customer and $datacenter
as either/or selectors, with a likely global default).  Having to manage
customer-wide variances per-host would quickly get pretty unmanageable.
Right now our puppet.conf files are generated via templates (with data
pulled from Hiera) and deployed by Puppet to take into account any
variances. I like the Sites concept, but it would have to account for a
similarly high degree of flexibility to be something we'd be able to
use.
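
For illustration, a simplified sketch of the sort of thing we do (the
Hiera key and template path here are invented, not our real names):

# Pick the Puppetmaster per customer via Hiera, falling back to a
# global default; the ERB template references $puppet_server.
$puppet_server = hiera('puppet::server', 'puppet.example.com')

file { '/etc/puppet/puppet.conf':
  ensure  => file,
  content => template('puppet/puppet.conf.erb'),
}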

Thanks,
Sean

