Re: [Puppet Users] Re: Using exported resources as data containers?

2014-09-10 Thread Daniel Siechniewicz
Hi,

This particular hiera backend, from what I understand, extracts values
from puppetdb "on the fly", so you don't have to put any values in your
hiera yaml/json files except the relevant puppetdb query. That would
centralize data gathering and localize it in a "data component".
Otherwise you end up with puppetdb queries inside your manifests, which
may not be what you want. Then again, this really depends on your
preferences. Good luck, and let us know whether it works for you.
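
For contrast, a manifest-side query with that module's query_nodes
function looks roughly like this (a sketch only: the Ntp::Server class
and the template are made-up placeholders; check the module's README for
the exact signatures):

# Ask PuppetDB for the certnames of every node declaring Class[Ntp::Server].
$ntp_servers = query_nodes('Class[Ntp::Server]')

file { '/etc/ntp/peers.conf':
  ensure  => file,
  # The (hypothetical) template iterates over $ntp_servers.
  content => template('ntp/peers.conf.erb'),
}

The hiera backend moves that query out of the manifest and into your
data, which is the centralization I mean above.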


Regards,
Daniel


On 10 September 2014 13:03, Matthew Pounsett  wrote:
>
>
> On 10 September 2014 07:30, Daniel Siechniewicz 
> wrote:
>>
>> Hi,
>>
>> Sounds like a potential job for
>> https://github.com/dalen/puppet-puppetdbquery: pdbresourcequery, or maybe
>> even the hiera backend.
>
>
> Hiera doesn't apply here, because the data is gathered from the servers
> (mostly from facts) where the config snippets are defined.  Putting that
> sort of thing in Hiera would violate Don't Repeat Yourself.  I'll have a
> look at the others, though; I'm not familiar with them.  Thanks for the
> suggestions.
>



[Puppet Users] Re: Using exported resources as data containers?

2014-09-10 Thread Daniel Siechniewicz
Hi,

Sounds like a potential job for
https://github.com/dalen/puppet-puppetdbquery: pdbresourcequery, or maybe
even the hiera backend.


Regards,
Daniel


On Tuesday, September 9, 2014 11:49:21 PM UTC+1, mpou...@afilias.info wrote:
>
> I have a difficult-to-manage application which does not implement a conf.d
> or include syntax in its configuration, but requires a bunch of config
> snippets containing information that lives only on groups of other
> servers.  I've been dealing with this by generating the config snippets
> from templates on some servers as exported resources, realizing them on
> the central server, and then executing an external script to "compile"
> the snippets into the final config.
>
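> For concreteness, the current pattern looks roughly like this (names are
> made up):
>
> # On each remote server: export a snippet describing this host.
> @@file { "app-snippet-${::hostname}":
>   path    => "/etc/app/snippets.d/${::hostname}.conf",
>   content => template('app/snippet.erb'),
>   tag     => 'app_snippet',
> }
>
> # On the central server: realize every snippet, then recompile.
> File <<| tag == 'app_snippet' |>> ~> Exec['compile-app-config']
>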
> This has a couple of drawbacks.  First, it requires puppet to stat nearly
> 15,000 tiny config snippets on every run, snippets that are not actually
> used directly and shouldn't need to exist.  Second, because the final
> config file is compiled by an external script, it isn't under puppet's
> control, so puppet has no idea if the file gets modified by something
> else and can't know to update it.
>
> I've been mulling over a better way to manage this config file, and I
> think I've hit on an idea, but I have no idea whether it will actually
> work, or what the syntax would look like if it did.
>
> I'm thinking of replacing the @@file resources on the remote servers with
> a defined type, say @@data_container.  Then, on the server where the data
> is needed, I could use a collector to iterate over the exported resources,
> reading data from them for use in the single template for the final
> config file.
>
> Where the data is defined:
> @@data_container { 'mydata':
>   someparameter => 'foo',
> }
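>
> The data_container define itself would just be an empty shell that
> carries the parameters, something like:
>
> define data_container ($someparameter) {
>   # Intentionally empty; this type exists only to hold data for collection.
> }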
>
> And then, in the template on the other host, somehow get a collection of
> those resources into an array and make use of their parameters as
> variables to be referenced in the template:
>
> <%- collection.each do |data| -%>
> <%= data.someparameter %>
> <%- end -%>
>
> Would this work at all?  Is there syntax to support something like this?
>
>
>



[Puppet Users] Resource chaining not working

2013-07-11 Thread Daniel Siechniewicz
Hi,

I have a seemingly simple situation; when it comes to resource chaining,
it can't really get much simpler than this:

node 'redis' {
  class { 'os::repo::misc': }
  class { 'redis': }
  Class['os::repo::misc'] -> Class['redis']
}

This doesn't work:

Info: Applying configuration version '1373529981'
Error: Could not find package redis
Error: /Stage[main]/Redis::Package/Package[redis]/ensure: change from 
absent to present failed: Could not find package redis
Notice: /Stage[main]/Redis::Config/File[redis_config]: Dependency 
Package[redis] has failures: true
Warning: /Stage[main]/Redis::Config/File[redis_config]: Skipping because of 
failed dependencies
Notice: /Stage[main]/Redis::Config/File[/apps/redis]: Dependency 
Package[redis] has failures: true
Warning: /Stage[main]/Redis::Config/File[/apps/redis]: Skipping because of 
failed dependencies
Notice: /Stage[main]/Redis::Service/Service[redis]: Dependency 
Package[redis] has failures: true
Warning: /Stage[main]/Redis::Service/Service[redis]: Skipping because of 
failed dependencies
Notice: /Stage[main]/Os::Repo::Misc/File[sp-misc.mirrors.list]/ensure: 
defined content as '{md5}09676e55c31e92aa2090199f5b06423a'
Info: create new repo misc in file /etc/yum.repos.d/misc.repo
Notice: /Stage[main]/Os::Repo::Misc/Yumrepo[misc]/mirrorlist: mirrorlist 
changed '' to 'file:///etc/yum.repos.d/sp-misc.mirrors.list'
Notice: /Stage[main]/Os::Repo::Misc/Yumrepo[misc]/enabled: enabled changed 
'' to '1'
Notice: /Stage[main]/Os::Repo::Misc/Yumrepo[misc]/gpgcheck: gpgcheck 
changed '' to '0'
Info: changing mode of /etc/yum.repos.d/misc.repo from 600 to 644
Notice: Finished catalog run in 3.07 seconds


Of course, since the repo is now deployed, it will work on the second puppet run.

There are no internal dependencies between redis and os::repo::misc. I've 
had the same problem recently with other repositories.

This also doesn't work: Yumrepo['misc'] -> Class['redis'].

Here's the os::repo::misc class:

class os::repo::misc {
  file { 'sp-misc.mirrors.list':
    ensure => present,
    source => 'puppet:///modules/os/repo/sp-misc.mirrors.list',
    mode   => '0644',
    path   => '/etc/yum.repos.d/sp-misc.mirrors.list',
  }

  yumrepo { 'misc':
    name       => 'misc',
    mirrorlist => 'file:///etc/yum.repos.d/sp-misc.mirrors.list',
    enabled    => 1,
    gpgcheck   => 0,
    require    => File['sp-misc.mirrors.list'],
  }
}

The redis module is more complicated (I cloned
https://github.com/electrical/puppet-redis), but I've had the same problem
with other, completely unrelated modules, so there doesn't seem to be any
pattern beyond "repo + something" simply not working.

Any thoughts? Could this be a bug, or am I doing something wrong?
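
One thing I notice in the log: Package[redis] sits in Redis::Package, a
subclass that the main redis class presumably just includes. If Puppet's
class-containment gotcha is at play here (a class does not contain the
classes it merely includes, so ordering Class['redis'] says nothing about
Redis::Package's resources), then chaining against the subclass directly
should work. A sketch of that workaround:

node 'redis' {
  class { 'os::repo::misc': }
  class { 'redis': }
  # Chain against the subclass that actually declares the package.
  Class['os::repo::misc'] -> Class['redis::package']
}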


Regards,
Daniel


[Puppet Users] Mcollective + ActiveMQ 5.8.0 - direct addressing problems

2013-05-30 Thread Daniel Siechniewicz
Hi,

I've decided to try the new shiny and installed ActiveMQ 5.8.0 (my own RPM
for CentOS 5). It seems that MCollective (2.2.3 in this case) doesn't play
nice with ActiveMQ 5.8.0: direct addressing mostly stops working, though
occasionally, rarely, a message does "go through". If I disable direct
addressing or downgrade ActiveMQ, it all springs back to life. "Indirect"
addressing is still OK, regardless of the version. I haven't gone as far
as updating the stomp gem, and I don't see anything in MCollective 2.2.4
to suggest that upgrading would help.
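
For reference, the workaround is just this flag in server.cfg and
client.cfg (assuming the stock setting name, where 1 enables it):

# /etc/mcollective/server.cfg (and client.cfg)
# Workaround: disable direct addressing until ActiveMQ 5.8.0 plays nice.
direct_addressing = 0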

I will need to rebuild those servers soon, but while I still have the logs
(mcollective and activemq debug logs), does anyone want them, or any other
info?


Regards,
Daniel


Re: [Puppet Users] Re: Terrible exported resources performance

2013-01-22 Thread Daniel Siechniewicz
On Tue, Jan 22, 2013 at 3:04 PM, Ken Barber  wrote:
>> This sounds like a sensible workaround; I will definitely have a look. I
>> haven't yet had enough time to look into the issue properly, but it seems
>> that this very long time is indeed spent on catalog construction. PuppetDB
>> fails after that is finished, so it seems to die when the nagios host
>> tries to report its catalog back.
>
> Do you mean it dies from an OOM when it tries to report the catalogue back?

Yes, that's what it looks like. Of course I can prevent it by giving it
more memory (which I did), but I already have a Postgres-backed PuppetDB
and had to give it 3GB of heap, and even then a puppet agent run on a
single host (OK, one with thousands of exported resources to collect and
process) that takes about 70 minutes can still kill it. Waiting 70
minutes for it to die adds insult to injury... Overall, not great. I'm
happy to redo this setup if I'm doing something wrong, but the scaling
just looks far worse than linear (30-odd nodes: 2 minutes; 100-odd
nodes: 70 minutes).
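
For anyone hitting the same wall: the memory bump is just the JVM heap in
the init config (path per the PuppetDB RPM packaging; adjust the value to
taste):

# /etc/sysconfig/puppetdb
JAVA_ARGS="-Xmx3g"

followed by a restart of the puppetdb service.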

Regards,
Daniel
