[Puppet Users] Bolt Developer Update (2019/09/27) — The Road to Bolt 2.0

2019-09-27 Thread Nick Lewis
The past few weeks have had some large features in flight and we're finally
seeing them land, so it's time for another update.

We released Bolt 1.31 and the big news is that our so-called plugin system
is now *actually* pluggable!

You can now bundle and ship inventory plugins as ordinary Bolt tasks in
modules. Currently, that includes looking up targets and config
(resolve_reference) and secret encryption/decryption.

To write a module with a plugin, you need to do two things:
1) Add a bolt_plugin.json at the root of the module to tell Bolt that it
has plugins. For now, this can just contain {}.
2) Write a task with the same name as the plugin "hook" you want to
implement (this can be overridden in bolt_plugin.json later) that returns
an object with a value key.

For example, to turn the mymodule module into a plugin that can retrieve
targets, just add bolt_plugin.json and write a task called
mymodule::resolve_reference.
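To make the shape concrete, here is a minimal sketch of what such a
resolve_reference task might look like if written in Python (the hosts and
user parameters are invented for illustration). Bolt passes a task its
parameters as a JSON object on stdin and expects JSON on stdout, with the
plugin's result under a top-level "value" key:

```python
#!/usr/bin/env python3
"""Illustrative resolve_reference task; parameter names are invented."""
import json
import sys


def resolve_reference(params):
    # Fabricate one target per entry in a hypothetical 'hosts' parameter.
    # A real plugin would query an API, a file, or some other service.
    targets = [
        {"uri": host, "config": {"ssh": {"user": params.get("user", "root")}}}
        for host in params.get("hosts", [])
    ]
    # Plugin tasks must return their result under a "value" key.
    return {"value": targets}


def main(stdin=sys.stdin, stdout=sys.stdout):
    raw = stdin.read()
    if raw.strip():
        json.dump(resolve_reference(json.loads(raw)), stdout)


if __name__ == "__main__":
    main()
```

Bolt would invoke this with whatever parameters appear under the _plugin
entry in the inventory, and splice the returned targets into place.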

mymodule can then be referenced from the inventory. When Bolt runs, it will
run the task with whatever parameters you set and will substitute the
result in the inventory.

groups:
  - name: mynodes
    targets:
      - _plugin: mymodule
        user: nick
        application: web

For a real-world example, check out the puppetlabs-azure_inventory module.

*Bolt 2, Inventory 2*

We also wanted to take some time to share some of our plans for the
upcoming Bolt 2.0 release.

The marquee feature of Bolt 2.0 is already taking shape in Bolt 1.x: the
new v2 inventory format and Target API.

The biggest change in the v2 inventory is how targets are defined and
managed. In inventory v1, a target always had a "name" field which was
parsed as a URI to determine connection information. That mixing of
identity with data caused trouble if you later wanted to change the
target's connection information in a plan, for instance to use a
different transport.

In v2, a target has a "name" that is separate from its connection
information. You can set both a URI and individual connection fields like
host and port. This makes it easier to dynamically modify and create new Targets
within a plan, which is helpful for plans that provision new nodes.
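As an illustration (host names and values invented), a v2 target entry can
carry a name alongside either a URI or individual connection fields:

```yaml
targets:
  - name: web1                # stable identity, shown in plan results
    uri: ssh://10.0.0.5:2222  # connection info, separate from the name
  - name: db1                 # no URI: individual connection fields instead
    config:
      transport: ssh
      ssh:
        host: 10.0.0.6
        port: 22
        user: root
```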

A related improvement is that arguments to parameters of type TargetSpec will
automatically be added to the inventory before the plan is run. For
example, consider the following plan:

plan test(TargetSpec $nodes) {
  return get_targets('all')
}

With the v1 inventory, running this plan against an empty inventory.yaml
file (for instance with bolt plan run test --nodes foo,bar,baz) would
return nothing, because the inventory was empty. With the v2 inventory, it
will return ["foo", "bar", "baz"], because those targets are added to the
inventory automatically.

Inventory v2 is also the only version which supports the plugin
functionality mentioned above. Inventory v2 is available for you to try out
in Bolt today and will be the only format in Bolt 2.0. Check out the docs
to see how to migrate your inventory.

We'll be back with more updates as Bolt 2.0 draws nearer. In the meantime,
you can follow the Bolt 2.0 milestone to see what's happening.

-- 
You received this message because you are subscribed to the Google Groups 
"Puppet Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to puppet-users+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/puppet-users/CANa5_qJWEk5HV76M1NfGj82mnufLwQbdijy%3Db4PoS%2BY4fPMTBg%40mail.gmail.com.


[Puppet Users] Bolt Developer Update (2019/09/05) — Ch-ch-ch-ch-changelog!

2019-09-06 Thread Nick Lewis
It's been two weeks since the last Bolt update and what a two weeks it's
been.

In that time, we've completed our move from Jira to GitHub for tracking
Bolt issues and work-in-progress. If you want to follow what we're working
on, you can now simply consult our project board.

We also just released Bolt 1.30 (*thirty*!). You can check out the full
list of changes in our shiny new CHANGELOG.md file.

This release has some helpful improvements to output and other fixes, but
the highlight is definitely the new pluggable puppet_library hook. This
satisfies an extremely popular request to be able to customize how Bolt
installs Puppet on systems when you call apply_prep().

By default, Bolt will use the puppet_agent::install task to install the
very latest version of Puppet. That's great if all you care about is
getting Puppet on the system so you can use apply(). But if you're using
Bolt alongside an established Puppet deployment, you probably care a bit
more about which version of Puppet gets installed and where it's downloaded
from. You may also want to do some additional configuration. With the new
puppet_library hook, you can do all that and more!

For instance, you can use the puppetlabs-bootstrap module to install a
Puppet Enterprise agent, connected to a specific puppet master.

# bolt.yaml
plugin_hooks:
  puppet_library:
    plugin: task
    task: bootstrap
    parameters:
      master: puppet.example.com
      cacert_content: 

This task will download the puppet-agent package from the master, install
the agent, and configure it to connect to that master. It will even
validate the master's certificate using the given CA certificate, ensuring
mutual trust.

We'll be improving this feature even more in the future, adding the ability
to control the user it runs as and letting the built-in
puppet_agent::install task control whether the service starts.

We've got a slew of other exciting features that will be landing soon,
including a generic task-powered plugin system, a rework of the Target API,
and a plugin to generate inventory from Azure VMs.

That's all for now! Tune in next week for an update about our plans for
Bolt 2.0.



[Puppet Users] Bolt Developer Update (2019/08/22) — Keep on pluggin'

2019-08-22 Thread Nick Lewis
Hello from Bolt developer land!

We currently have a slew of large plugin-related features in flight, which
means this week's release has just a couple of bug fixes and improvements.

*Coming soon*

We're reworking the API for interacting with Target objects from within a
plan to make it easier to dynamically create Targets and to change their
config during a plan run.

One of our most requested features, the ability to control how `apply_prep`
installs Puppet on targets, is in progress now.

Also on the plugin front, we have both an Azure inventory integration in
the works and the ability to add new plugins via Puppet modules.

Look for those features to be landing in releases over the next couple of
weeks.

Happy Bolting,
The Bolt Team



[Puppet Users] Bolt Developer Update (2019/08/16) — Welcome to GitHub

2019-08-16 Thread Nick Lewis
Welcome back! The reception to the last developer update was so positive
(i.e. nobody complained) that we're back with another one.

The big news this week is Bolt 1.29.0, following hot on the heels of last
week's 1.28.0 release.

This release is a little leaner than last week's, but it continues the
HashiCorp improvements, adding remote state support to the Terraform
inventory plugin.

Speaking of plugins, we're continuing to evolve the plugin system. Check
out the proposals to clean up "lookup" plugin evaluation and to load
plugins from modules, and let us know what you think.

As you may see from those links, we're in the midst of migrating Bolt's
project tracking from Jira to GitHub. If you want to see what we're
scheming up or have an issue to file, GitHub is the place to do it.

Till next week,
The Bolt Team

P.S. If you missed last week's update, check it out.



[Puppet Users] Bolt Developer Update (2019/08/09) — Vault, the Target API, and apply_prep

2019-08-09 Thread Nick Lewis
Welcome to the *first ever* Bolt developer update! This is a new experiment
in communicating more directly about what we're actively working on and
what we're thinking about in the near term.

This week, we released Bolt 1.28.0. The feature I want to highlight is the
new Vault plugin. This plugin allows you to query inventory information
(such as passwords) from HashiCorp Vault.

As an example, this inventory snippet retrieves a private key from Vault
and uses it to connect to host.example.com.

targets:
  - host.example.com
config:
  ssh:
    user: root
    private-key:
      key-data:
        _plugin: vault
        server_url: http://127.0.0.1:8200
        auth:
          method: userpass
          user: bolt
          pass: bolt
        path: secrets/bolt
        field: private-key
        version: 2

Try out the plugin and let us know what you think.

*Up next*

Coming down the pipe, Alex posted a specification for some refinements to
the Target/inventory API. Our aim with that is to standardize the
operations you can use within a plan to dynamically create, modify, and
regroup targets.

We've also started discussing extensions to the apply_prep() function to
make it possible to use custom agent install methods and alternate fact
sources. Please check out the current state of that proposal and give feedback.

Speaking of feedback, please respond on this thread or in #bolt on Slack to
let us know what you think of this newsletter format. You can also view
this post on the web.

Thanks!
The Bolt Team



Re: [Puppet Users] Custom type and provider

2018-09-21 Thread Nick Lewis
On Fri, Sep 21, 2018 at 1:34 PM Rafael Tomelin wrote:

> Hi guys,
>
> I am creating a type in custom provider, but I am not able to understand
> the following question.
>
> I created a basic type to understand the concept of things, as follows:
> Puppet::Type.newtype(:mydir) do
> @doc = "First custom type."
>
> ensurable do
> defaultvalues
> defaultto :present
> end
>
> newparam(:name, :namevar => true) do
> end
>
> newparam(:install_location) do
> end
> end
>
>
> After creating the provider, but does not recognize the same and displays
> the following error:
> Error: Could not find a suitable provider for mydir
> Notice: Applied catalog in 0.63 seconds
>
> Puppet::Type.type(:mydir).provide(:linux) do
>
> defaultfor :operatingsystem => :linux
> confine:operatingsystem => :linux
>
> commands :mkdir => "mkdir"
> commands :rm => "rm"
>
> def exists?
> Puppet.info("checking if is already deployed")
> deploy_dir = "#{resource[:install_location]}/#{resource[:name]}"
>
> File.directory?(deploy_dir) and Dir.entries(deploy_dir).size > 2
> end
>
> def create
> Puppet.info("Deploying the appliction")
> deploy_dir = "#{resource[:install_location]}/#{resource[:name]}"
>
> mkdir('-p', deploy_dir)
> end
>
> def destroy
> Puppet.info("Removing the appliction")
> deploy_dir = "#{resource[:install_location]}/#{resource[:name]}"
> rm('-rf',deploy_dir)
> end
> end
>
>   What I did not envisage is how Puppet identifies the provider to be used
> in the OS?
>

It looks like the trouble here is that "linux" isn't a valid value for
operatingsystem. If you look at some examples in Puppet, you can see that
common values of operatingsystem are things like "Fedora", "CentOS", and
"SLES". The fact whose value actually *is* "Linux" is kernel, so confine
and default the provider on kernel instead.


> --
>
> Atenciosamente,
>
> Rafael Tomelin
>
> skype: rafael.tomelin
>
> E-mail: rafael.tome...@gmail.com
>
> RHCE  - Red Hat Certified Engineer
> PPT-205 - Puppet Certified Professional 2017
> Zabbix- ZABBIX Certified Specialist
> LPI3
> ITIL v3
>



Re: [Puppet Users] unexpected behavior in Bolt 0.18.1

2018-04-04 Thread Nick Lewis
On Mon, Apr 2, 2018 at 8:03 AM Ty Young  wrote:

> Hey fellow Puppet users,
>
> I'm a Puppet newb, but I did look through postings in this group and in
> other locations for the answer before posting this question.  Please
> forgive me if I'm asking a FAQ.
>
> When I execute a Bolt 'run command' task, my connection succeeds using SSH
> keys but job terminates with a status code 1 and reminds me of the proper
> syntax for scp (secure copy) commands.
>
> root@zztypuppet01:tty1@23:25:08:~/.puppetlabs # *bolt command run
> '/bin/echo "hello world"' --nodes remotehost --user root --verbose*
> Started on remotehost...
> 2018-04-01T23:25:13.182788 INFO   remotehost: Command failed with exit
> code 1
> Failed on remotehost:
>   The command failed with exit code 1
>   STDERR:
> usage: scp [-1246BCpqrv] [-c cipher] [-F ssh_config] [-i identity_file
> ]
>[-l limit] [-o ssh_option] [-P port] [-S program]
>[[user@]host1:]file1 ... [[user@]host2:]file2
> Failed on 1 node: remotehost
> Ran on 1 node in 0.18 seconds
> root@zztypuppet01:tty1@23:25:13:~/.puppetlabs #
>
> Logging on the remote host for SSH indicates a successful connection:
>
> Apr  1 23:28:19 remotehost sshd[29147]: Accepted publickey for root from
> 10.204.40.10 port 37122 ssh2
> Apr  1 23:28:19 remotehost sshd[29147]: pam_unix(sshd:session): session
> opened for user root by (uid=0)
> Apr  1 23:28:19 remotehost sshd[29147]: pam_unix(sshd:session): session
> closed for user root
>
> According to the documentation for Bolt, I'm formatting the command
> successfully.
>
> Am I missing something here?
> Thanks
> ty
>

That's really a mysterious error. Your usage of bolt looks correct, and
there shouldn't be any files uploaded when just running a command
regardless. Can you try the command with --debug and provide that output?
Also, do you have any config in ~/.puppetlabs/bolt.yaml?

Nick




Re: [Puppet Users] Need help with puppetdb query from manifest using puppetdb_query (PQL)

2018-02-15 Thread Nick Lewis
On Thu, Feb 15, 2018 at 2:11 PM John Bishop  wrote:

> Hello,
>
>I'm new to using PQL and i'm having a bit of difficulty.   I'm trying
> to return the ipaddress of any node where the value of three trusted facts
> (pp_application, pp_role and pp_environment) meets some criteria.
>
> I have a query which will return only the nodes that i care about,  but
> I'm having a problem structuring the query to also return the top level
> ipaddress fact with the results.  Any help would be appreciated.  Thank
> you.
>
> $test_query = '["from", "facts",
>  ["and",
>["subquery", "fact_contents",
>["and",
>  ["~>", "path", ["trusted", "extensions",
> "pp_application"]],
>  ["=", "value", "someapp"]]],
>["subquery", "fact_contents",
>["and",
>  ["~>", "path", ["trusted", "extensions", "pp_role"]],
>  ["=", "value", "appserver"]]],
>["subquery", "fact_contents",
>["and",
>  ["~>", "path", ["trusted", "extensions",
> "pp_environment"]],
>  ["=", "value", "development"]]'
>
>
In PuppetDB, a "fact" is an entry with [certname, environment, name,
value]. A subquery between "facts" and "fact_contents" means "find facts
whose value matches this fact_contents query". In this case, that will
return the "trusted" fact. You then want to lookup the corresponding
"ipaddress" fact for matching nodes. Since you're really looking up the
value of one fact using a query based on another fact, you want to use your
existing query as a fact subquery.

["from", "facts",
  ["and",
    ["=", "name", "ipaddress"],
    ["subquery", "facts",
      ["and",
        ["subquery", "fact_contents",
          ["and",
            ["~>", "path", ["trusted", "extensions", "pp_application"]],
            ["=", "value", "someapp"]]],
        ["subquery", "fact_contents",
          ["and",
            ["~>", "path", ["trusted", "extensions", "pp_role"]],
            ["=", "value", "appserver"]]],
        ["subquery", "fact_contents",
          ["and",
            ["~>", "path", ["trusted", "extensions", "pp_environment"]],
            ["=", "value", "development"]]]]]]]


The facts subquery will restrict the outer facts query to return facts only
for nodes that match the subquery.

However, there's a more straightforward way to achieve this using PQL
rather than the AST query language you're using.

facts[certname, value] {
  name = "ipaddress" and
  certname in inventory[certname] {
trusted.extensions.pp_application = 'someapp' and
trusted.extensions.pp_role = 'appserver' and
trusted.extensions.pp_environment = 'development'
  }
}


This query uses the inventory entity to find nodes with the three specific
trusted extensions, and then looks up the ipaddress fact for each of those
nodes and returns the node name and the value of the ipaddress fact.

I don't have an environment available with those particular trusted
extensions in use, so I can't verify it's 100% correct, but it should at
least be on the right track.
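For what it's worth, the same PQL can also be issued from outside a
manifest against PuppetDB's v4 query API. A rough Python sketch (the helper
names are invented, and the endpoint URL will depend on your setup):

```python
import json
from urllib import request


def fact_by_trusted_extensions(fact, extensions):
    """Build a PQL query returning one fact's value for every node whose
    trusted extensions match all of the given key/value pairs."""
    conditions = " and ".join(
        "trusted.extensions.%s = '%s'" % (key, value)
        for key, value in sorted(extensions.items())
    )
    return (
        'facts[certname, value] { name = "%s" and '
        "certname in inventory[certname] { %s } }" % (fact, conditions)
    )


def query_puppetdb(pql, url="http://puppetdb.example.com:8080/pdb/query/v4"):
    # PuppetDB's v4 query endpoint accepts a POSTed JSON body of the
    # form {"query": "..."} and returns the results as JSON.
    req = request.Request(
        url,
        data=json.dumps({"query": pql}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.load(resp)
```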

$test_results = puppetdb_query($test_query)
>
> Notify { '*** query results ***\r':
>   message => "data: ${test_results}",
> }
>



[Puppet Users] Re: Multiple Puppet masters each running as their own CA connecting to a single PuppetDB instance

2013-07-16 Thread Nick Lewis
On Tuesday, July 16, 2013 1:25:22 PM UTC-7, replicant wrote:

 So, 

 We are working on migrating a global deployment of Puppet over to a 
 single PuppetDB instance away from a single MySQL storeconfigs 
 instance and are running into an issue. It seems is that PuppetDB will 
 only allow nodes from a single Puppet master to connect if each Puppet 
 master is running as it's own CA, is this statement correct? 

 Is it possible to have multiple Puppet masters, each running as their 
 own CA, talk to a single PuppetDB instance? 


By having multiple CAs, you're effectively establishing separate networks, 
so it doesn't seem to make much sense to comingle their data. PuppetDB 
itself has no notion that the data ought to be kept separate, which means a 
master on one CA can access all the data from a master on another CA. In 
that case, you may either be undermining the purpose of having separate CAs 
or not have a good reason to have separate CAs.

But assuming this really is what you want, you should be able to accomplish 
it by using an SSL termination proxy configured to present different 
certificates to different clients.
 

 -- 
 I've seen things you people wouldn't believe. Attack ships on fire off 
 the shoulder of Orion. I watched C-beams glitter in the dark near the 
 Tannhauser gate. All those moments will be lost in time... like tears 
 in rain... Time to die. 






Re: [Puppet Users] Announce: PuppetDB 1.3.0 Available

2013-05-08 Thread Nick Lewis
On Wednesday, May 8, 2013 5:31:16 AM UTC-7, Erik Dalén wrote:

 On Tuesday 7 May 2013 at 01:44, Chris Price wrote: 
  * Report queries 

  The query endpoint `experimental/event` has been augmented to support a 
   
  much more interesting set of queries against report data. You can now 
 query 
  for events by status (e.g. `success`, `failed`, `noop`), timestamp 
 ranges, 
  resource types/titles/property name, etc. This should make the report 
  storage feature of PuppetDB *much* more valuable! 

 Very nice news.   

 But is this planned to get some further extensions? Some queries I would 
 like to make still seem quite hard (at least to do in a single query). 

 For example finding all nodes that failed their last puppet run seems like 
 it would need one node query and then a event query for each one. 

 Will there be better support for subqueries across reports, events and the 
 other endpoints? That would make some types of queries easier. For example 
 you could make a single query to get the puppet version of all nodes that 
 failed any resource within the last 30 mins. 


This is definitely unfinished (it's under /experimental). We plan to at 
least add subqueries like you get with resources/facts, and possibly 
counting, grouping, and selecting a subset of columns. If you have any 
further use cases that those features still wouldn't address, let us know. 
We want to make this API as robust as we can; until we do, we thought it 
was best to get what we already had out for users to try.
 

 Any suggestions for nifty syntax for puppetdbquery to query stuff like 
 that? :) 


I'll get back to you on that. :)
 

 --   
 Erik Dalén 








[Puppet Users] Re: Attributes in user resource causing an error (PE 2.8.1)

2013-04-19 Thread Nick Lewis

On Friday, April 19, 2013 11:58:27 AM UTC-7, Matt Hargrave wrote:

 I am trying to use the attributes field for AIX user attributes.  I 
 currently have:

 user { test1:
   ensure => present,
   uid => '123456',
   gid => 'system',
   shell => '/bin/ksh',
   home => '/home/test1',
   attributes => [login=true, rlogin = true],
   managehome => true,
 }



 when I apply this I get 

 err: /Stage[main]//User[test1]: Could not evaluate: private method 
 `select' called for nil:NilClass


 I have tried to just have a single attribute with and without the brackets 
 [].  As soon as a remove that field everything else works. 


  
From the code, it looks like this needs to be specified as a hash. Could 
you try that and let me know if it works?

Strangely enough, there seems to be some code which complains if it's not a 
hash, but does so by telling you it needs to be a list of key=value pairs. 
That should definitely be fixed, at least.





[Puppet Users] Re: Help me (fully) clear out stored configs from PuppetDB Postgresql

2013-04-04 Thread Nick Lewis
On Thursday, April 4, 2013 1:08:06 PM UTC-7, Michael O'Dea wrote:

  exasperated sigh 

 So, the hosts in question have, as part of their unique hostnames, a small 
 hex string which is in uppercase.  In the Nagios view where I was seeing 
 these offline hosts, the hostname's case was preserved.  Within PuppetDB 
 however, the hostname is all lowercase.  As a consequence, these hosts were 
 not removed because instead of deactivating that host, a *new* certname 
 entry was created, with my mixed-case value, and *that* entry was marked 
 as deactivated.  The actual host was still humming right along.

 Well, I guess I learned something.  Technically there's a bug there -- but 
 I've lost a whole day on this issue so I'm going to refrain from reporting 
 it at the moment.  Hope this helps someone else.  If you're deactivating a 
 host with a capital letter, odds are good you're not deactivating what you 
 think you're deactivating.  


I'll join in that exasperated sigh with you. :) This issue pops up fairly 
commonly (albeit usually with hostname vs. fqdn). I remember a conversation 
on irc about having the `puppet node deactivate` command first check if the 
node exists. It could then fail if the node doesn't (because you probably 
just got the name wrong), or have an option to force (if for some reason 
you want to proactively tell PuppetDB a node exists, but is inactive). 
Potentially, we could also perform a regex query and make suggestions. I'm 
not sure what happened with that idea, but I guess it didn't go anywhere. 
At any rate, I think the frustration of accidentally deactivating the wrong 
nodes far outweighs whatever rationale someone might have for deactivating 
a node that doesn't exist. This needs to change.

I'm sorry for your trouble. I'll make sure to follow up on making that 
change, this time.
 


 On Thursday, April 4, 2013 3:19:00 PM UTC-4, Michael O'Dea wrote:

 Hi all,

 I recently started using the Nagios resources to populate a monitoring 
 server.  I cycle hosts fairly quickly in my environment so already I've had 
 five hosts come and go from under Nagios.  After a fair amount of searching 
 I discovered puppet node deactivate and performed same on the removed 
 hosts -- I'm still trying to figure out how to make that part of the 
 process going forward -- but somehow only one of my five hosts ended up 
 being removed from my Nagios configs.  I am using resource { purge => true }
 for the affected resource types.

 So here's where I'm at -- if I run puppet node status node on any of 
 these missing hosts, they appear as deactivated.  If I clear out my 
 nagios config files and re-run puppet agent, the nodes and their services 
 (I did have an exported nagios_service resource when these hosts were 
 alive, which I've since removed -- in case that matters) will re-appear. 
  I've tried *puppet node clean*, *puppet node destroy*, *puppet node 
 deactivate*, with and without terminus=puppetdb.  I can see in the 
 puppetdb log that it has received multiple deactivate commands for these 
 hosts.  Nonetheless, the items are still appearing when the nagios host 
 performs a collection.  I've got to put an end to that! 

 An example of puppet node status for one of the affected:

 # puppet node status [badnodename]
 [badnodename]
 Deactivated at 2013-04-03T23:00:55.349Z
 No catalog received
 No facts received


 One interesting bit.  First, when I run puppet agent --test, during 
 catalog compilation I _was_ seeing the following for all four affected 
 hosts:  

 warning: Nagios_service check_ssh_[badhost] found in both naginator and 
 naginator; skipping the naginator version


 Out of frustration, I disabled storedconfigurations on the puppet master 
 and restarted it.  After a few catalogs had processed, I updated Nagios 
 after manually purging the hosts file.  As expected, all my host and 
 service definitions disappeared.  I then re-enabled storedconfigurations, 
 fingers-crossed that the old host defs would be gone -- to my dismay, they 
 reappeared on the *second* catalog run after I brought stored_configs back 
 online.  Now, I no longer get the error message above, but I do still get 
 the deactivated hosts and associated services.  

 At this point, I'd be happy to simply wipe all of my existing stored 
 configurations, as I suspect these four hosts have gotten themselves into 
 some kind of limbo.  However, while the instructions for removing stored 
 config data from the old MySQL puppet db is quite straightforward, it 
 doesn't seem to map to the postgresql DB schema, and I can't find any 
 advice on how to go about wiping it there.  I'd prefer not to just scrap my 
 database, but I'm getting closer to that point.

 Any help would be appreciated.




[Puppet Users] Re: PuppetDB API permissions

2012-10-26 Thread Nick Lewis
On Friday, October 26, 2012 7:24:18 AM UTC-7, ak0ska wrote:

 Hello,

 Is it possible to control from which nodes is it allowed to execute 
 commands like replace catalog and replace facts, and which nodes can 
 only do queries (but no changes)? It seems like once someone could access 
 the service through http or https (depending on jetty.ini settings) can do 
 both.


Unfortunately, this isn't currently possible, though it's certainly 
something we'd like to provide in the future. Currently the only 
restriction that can be made is a whitelist of certnames which are allowed 
to talk to the API, for both read and write alike.
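For reference, that whitelist is a flat file of certnames referenced from PuppetDB's config; a sketch only (the exact setting name and section may vary by PuppetDB version, so check the configuration docs):

```ini
[jetty]
# One certname per line; only nodes presenting one of these certs
# may talk to the HTTPS API at all -- reads and writes alike.
certificate-whitelist = /etc/puppetdb/whitelist
```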

Until this is supported by PuppetDB itself, you could use a proxy to allow 
only certain routes.
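For instance, a reverse proxy in front of PuppetDB could expose only the query routes while blocking command submission. A hypothetical nginx sketch (paths and port are assumptions based on the v1-era HTTP API, not a tested configuration):

```nginx
# Block the write path (replace catalog / replace facts go through /commands).
location /commands {
    deny all;
}
# Pass everything else (read-only queries) through to PuppetDB.
location / {
    proxy_pass http://127.0.0.1:8080;
}
```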

If we were to add this feature, would it be sufficient to just have no 
access, read access, and read/write access as categories, or would you 
need something more granular than that (for instance, can query metrics but 
not facts)?




[Puppet Users] Re: [Puppet-dev] Announce: PuppetDB 1.0 Available

2012-09-21 Thread Nick Lewis
On Thursday, September 20, 2012 10:44:34 PM UTC-7, Erik Dalén wrote:

 Great work! 

 But what are the changes since 0.11.0? 


We've been treating our releases until now as betas/release candidates, so 
actually 1.0 is just a promotion of 0.11.0. It's identical in features. I 
guess we didn't really make that clear. The main reasons for this release 
are to formalize the API and to send the message that it's ready for 
general production use.
 

 It would also be interesting to see a roadmap of post 1.0 features 
 that are planned. 


We're looking for ways to be more transparent about development. To that 
end, we recently opened our development board on Trello to the public:

http://links.puppetlabs.com/puppetdb-trello

Caveats are that those are mostly low-level tasks, rather than a general 
high-level overview, and the backlog is largely unprioritized. But until we 
have something better, that at least shows what we're working on at any 
given time. Anyone is free and encouraged to comment and/or vote on cards. 
And if you're looking to get involved with development, it's a good place 
to start.

In lieu of a formal roadmap, I can tell you that the general plan for the 
next few months includes better fact queries, report storage, possibly 
historical catalog storage, and what we're calling Grand Unified Query 
(queries for any kind of data based on every other kind of data). And, as 
always, making the whole thing even faster and more space-efficient. Of 
course, having said that, I can't promise the plan won't change wildly at 
any moment. :)

If you have any ideas about how we can better communicate this information, 
or other information you'd like available, I'd love to hear them.

Nick Lewis
 


 On 21 September 2012 02:03, Moses Mendoza mo...@puppetlabs.com 
 wrote: 
  PuppetDB 1.0 is now available! 
  
  PuppetDB, a component of the Puppet Data Library, is a centralized 
 storage 
  daemon for auto-generated data. This initial release of PuppetDB targets 
 the 
  storage of catalogs and facts. 
  
  Much more information is available on the Puppet Labs blog: 
  
  http://puppetlabs.com/blog/introducing-puppetdb-put-your-data-to-work/ 
  
  ...and on the docs site: 
  
  http://docs.puppetlabs.com/puppetdb/1/ 
  
  ...and there will also be 2 talks involving PuppetDB at next week's 
 PuppetConf. 
  
  What can it do for you? 
  
  *  It’s a drop-in, 100% compatible replacement for storeconfigs 
  *  It’s a drop-in, 100% compatible replacement for inventory service 
  *  It's already in production at many sites, handling thousands of nodes 
 and 
 millions of resources 
  *  It hooks into your Puppet infrastructure using Puppet’s pre-existing 
 extension points (catalog/facts/resource/node terminuses) 
  *  It’s much faster, much more space-efficient, and much more scalable 
 than current storeconfigs and the current inventory service. 
 *  We can handle a few thousand nodes, with several hundred 
resources each, with a 30m runinterval on our laptops during 
development. 
  *  It stores the entire catalog, including all dependency and 
 containment information 
  *  It exposes well-defined, HTTP-based methods for accessing stored 
 information 
  *  Documented at http://docs.puppetlabs.com/puppetdb 
  *  It presents a superset of the storeconfigs and inventory service 
 APIs for use in scripts or by other tools 
 *  In particular, we support arbitrarily nested boolean operators 
  *  It decouples catalog and fact storage from the compilation process 
 *  PuppetDB obsoletes previous puppetq functionality 
  *  It works Very Hard to store everything you send it; we auto-retry 
 all storage requests, persist storage requests across restarts, 
 and preserve full traces of all failed requests for post-mortem 
 analysis 
  *  It’s secured using Puppet’s built-in SSL infrastructure 
  *  It’s heavily instrumented and easy to integrate its performance info 
 into 
 your monitoring frameworks 
  
  We encourage you to try it out, hammer it with data, and let us know 
  if you run into any issues! 
  
  # Downloads 
  
  Available in native package format at 
  
  http://yum.puppetlabs.com 
  
  http://apt.puppetlabs.com 
  
  Source (same license as Puppet):  http://github.com/puppetlabs/puppetdb 
  
  Available for use with Puppet Enterprise 2.5.1 and later at 
  
  http://yum-enterprise.puppetlabs.com/ and 
 http://apt-enterprise.puppetlabs.com/ 
  
  Puppet module (Puppet Enterprise support is forthcoming): 
  
  http://forge.puppetlabs.com/puppetlabs/puppetdb 
  
  # Documentation (including how to install): 
 http://docs.puppetlabs.com/puppetdb 
  
  # Issues can be filed at: 
  http://projects.puppetlabs.com/projects/puppetdb/issues 
  
  # See our development board on Trello: 
  http://links.puppetlabs.com/puppetdb-trello 
  

[Puppet Users] Re: [Puppet-dev] Re: Changes to allowed function calls in Puppet 3.0

2012-09-14 Thread Nick Lewis
On Friday, September 14, 2012 at 1:55 PM, Alessandro Franceschi wrote:
 Hi Andrew,
 thank you for the notice (and thanks to Ken B. for informing me about it).
 In my modules I have tons of calls to custom functions in the *arguments* of 
 my classes.
 Things like:
 class apache (
   $my_class= params_lookup( 'my_class' ),
   $source  = params_lookup( 'source' ),
   $source_dir  = params_lookup( 'source_dir' ),
 [...]
 that, I guess, should be something like 
 class apache (
   $my_class= params_lookup( [ 'my_class' ] ),
   $source  = params_lookup( [ 'source' ] ),
   $source_dir  = params_lookup( [ 'source_dir' ]),
 [...]
 Now, my question is:
 you wrote that this conversion from string to array is needed when calling 
 custom functions in templates or other functions, but not in Puppet DSL.
 Is the conversion required  also for the class/define arguments list (which 
 might be considered somehow a border case)?
 
This code doesn't need to change. The reason for this issue in the first place 
is that Puppet's calling convention for functions is to wrap all the arguments 
in an array and pass that as a single argument to the corresponding Ruby 
method. So when calling the Ruby method directly, you need to also wrap your 
arguments in an array. This is still Puppet code, so it's fine.
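A minimal plain-Ruby sketch (illustrative only, not Puppet's real internals) of why the wrapping matters:

```ruby
# Sketch of Puppet's calling convention: the Ruby method behind a function
# receives all caller-supplied arguments as a single array. The method name
# and behavior here are illustrative, not Puppet's actual implementation.
def function_template(args)
  # Iterating over the argument array is why a bare String used to "work"
  # on Ruby 1.8 (String had #collect via Enumerable) but fails on 1.9+.
  args.collect { |name| "rendered #{name}" }.join("\n")
end

function_template(['my_template.erb'])  # correct: arguments wrapped in array
# function_template('my_template.erb')  # NoMethodError on Ruby >= 1.9
```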
 
 
 Any info is welcomed.
 
 Best regards
 Alessandro Franceschi
 Example42.com (http://Example42.com)
 
 
 On Friday, September 14, 2012 8:22:59 PM UTC+2, Andy Parker wrote:
  This is a heads up to anyone who has written code that calls custom 
  functions. We are making a change in Puppet 3.0 that will make the 
  calls reject incorrect calls (see bug #15756). Calling functions from 
  ruby code (either other functions or erb templates) was always 
  supposed to be done by placing all of the arguments in an array and 
  passing the array to the method call. This was done by 
  surrounding the arguments with square brackets. 
  
  function_template( [ 'my_template.erb' ] ) 
  
  Some, but not all, functions had been written in a way that would by 
  chance work when this was not done. The template function is one such 
  example. It would work if you were running on a 1.8 ruby if it was 
  called as: 
  
  function_template( 'my_template.erb' ) 
  
  However, if you tried running on a 1.9 ruby that function call would 
  fail with an error about not having a method named 'collect' on a 
  String, which is caused by a change in the String class in ruby. To 
  prevent these kinds of errors in the future, Puppet 3.0 is going to 
  check that the arguments are passed in an array and fail if they are 
  not. 
  
  I did a quick check across the code available in the Forge and it 
  doesn't look like it was too common that this was done wrong, but you 
  might want to check through your code for calls of functions where the 
  arguments are not being passed in an array, and change them to use an 
  array. 
  
  NOTE: If you have only ever called custom functions from inside the 
  Puppet Language, then you don't need to worry about anything, this 
  does not apply to that. 
  
  Thanks, 
  Andrew Parker 
  Puppet Team Lead 




[Puppet Users] Re: PuppetDB Replication

2012-09-06 Thread Nick Lewis
On Thursday, September 6, 2012 11:03:13 AM UTC-7, tsuave wrote:

 I have puppetdb setup on our puppetmaster with a postgreSQL DB setup on 
 two servers db1 and db2. I am trying to setup replication between db1 and 
 db2, using rubyrep. Rubyrep can copy the data but not the schema. I tried 
 to dump the schema of a DB after puppet has connected to use as a template 
 to create both DBs, start the replication, and then connect the puppetdb. 
 Unfortunately, when puppetdb connects to a db that already has a 
 schema/tables created, I get the following:

 2012-09-06 10:44:51,102 ERROR [main] [puppetlabs.utils] Uncaught exception
 java.lang.NullPointerException
 at clojure.lang.Numbers.ops(Numbers.java:942)
 at clojure.lang.Numbers.gt(Numbers.java:227)
 at 
 com.puppetlabs.puppetdb.scf.migrate$pending_migrations$fn__1296.invoke(migrate.clj:184)
 at clojure.core$filter$fn__3830.invoke(core.clj:2478)
 at clojure.lang.LazySeq.sval(LazySeq.java:42)
 at clojure.lang.LazySeq.seq(LazySeq.java:60)
 at clojure.lang.RT.seq(RT.java:466)
 at clojure.core$seq.invoke(core.clj:133)
 at clojure.core$reduce.invoke(core.clj:5994)
 at clojure.core$into.invoke(core.clj:6005)
 at 
 com.puppetlabs.puppetdb.scf.migrate$pending_migrations.invoke(migrate.clj:184)
 at 
 com.puppetlabs.puppetdb.scf.migrate$migrate_BANG_.invoke(migrate.clj:190)
 at 
 com.puppetlabs.puppetdb.cli.services$_main$fn__8398.invoke(services.clj:250)
 at 
 clojure.java.jdbc.internal$with_connection_STAR_.invoke(internal.clj:186)
 at 
 com.puppetlabs.puppetdb.cli.services$_main.doInvoke(services.clj:249)
 at clojure.lang.RestFn.invoke(RestFn.java:421)
 at clojure.lang.Var.invoke(Var.java:405)
 at clojure.lang.AFn.applyToHelper(AFn.java:163)
 at clojure.lang.Var.applyTo(Var.java:518)
 at clojure.core$apply.invoke(core.clj:600)
 at com.puppetlabs.puppetdb.core$_main.doInvoke(core.clj:80)
 at clojure.lang.RestFn.applyTo(RestFn.java:137)
 at com.puppetlabs.puppetdb.core.main(Unknown Source)
 2012-09-06 10:44:51,106 INFO  [Thread-4] [cli.services] Shutdown request 
 received; puppetdb exiting.

 Also, if we connect puppetdb to a blank db and it creates any data, the 
 replication will not work because the data will be duplicated. It should 
 work if we connect to db1 and then db2 without any nodes checking in, but 
 we want to automate the builds and replication, so going through these 
 motions would be hard to automate, and if a node checks in while either 
 db is connected and data is created, it won't replicate. 

 Does anyone know of a way I can create the puppetdb schema without 
 connecting the puppetdb service to the empty database that will not cause 
 the above exception? I would like to create puppetdb on db1 and db2, load 
 the schema, start the replication and then connect puppetdb to a 
 load balancer that can choose either db1 or db2 and work correctly because 
 of replication. 

 Anyone have a better idea than rubyrep?


PuppetDB uses the schema_migrations table to determine the migrated state 
of the database. If the table doesn't exist, it assumes the database needs 
to be fully created from the beginning. But it's not properly handling the 
case where the table exists but has no data. However, I think the right 
thing for it to do in that case is explicitly fail, noting that the 
database is in an invalid state. So either way, this isn't going to work.

The best way to create the database would be to run PuppetDB against it. 
Failing that, you should probably replicate the data *before* starting 
PuppetDB against the new database.
 


 Thanks!






[Puppet Users] Re: Access @resource in custom type

2012-07-31 Thread Nick Lewis
On Tuesday, July 31, 2012 2:05:28 PM UTC-7, ZJE wrote:

 Is it possible to access @resource variables inside a type?

 I would like to make some decisions on parameters based on other 
 parameters that may have already been set.

 For example,
 ---
   newparam(:param1) do
     Puppet.debug "Found drivesperarray parameter"
     desc "parameter 1"
     validate do |value|
       if resource[:otherparam] then
         # dosomething
       else
         resource[:param1] = 0
       end
     end
     Puppet.debug "Parameter 1 is: #{@resource[:param1]}"
   end
 ---

 But I keep getting messages like "undefined method `[]' for nil:NilClass". 

 Anyone have experience with this? I've tried searching around for example 
 without much luck...


It sounds like what you actually want is a munge block, which is used to 
change the value of the parameter.

munge do |value|
  if resource[:otherparam] then
#dosomething
  else
0
  end
end

validate should be used only to raise an exception if the value is invalid. 
Puppet will call validate and then munge. Also, parameters are set in the 
order they're defined in the type/type.rb file, and validated/munged 
before moving on to the next parameter. So a parameter can only depend on 
the values of parameters that come *before* it.
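That ordering can be simulated in plain Ruby; this is a sketch of the behavior only (hypothetical parameters :otherparam and :param1, relying on Ruby's insertion-ordered hashes), not Puppet's implementation:

```ruby
# Simulate Puppet setting parameters strictly in definition order:
# each parameter is munged before the next one is even looked at.
resource = {}

munges = {
  otherparam: ->(value) { value },
  # :param1 can see :otherparam only because it is defined *after* it
  param1:     ->(value) { resource[:otherparam] ? value : 0 },
}

input = { otherparam: nil, param1: 5 }
munges.each { |name, munge| resource[name] = munge.call(input[name]) }

resource[:param1]  # 0, since :otherparam was nil when :param1 was munged
```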
 

 Thanks!





[Puppet Users] Re: can't authenticate based on IP? what? huh?

2012-07-20 Thread Nick Lewis
On Tuesday, July 17, 2012 3:46:21 PM UTC-7, Jo wrote:

 Okay, I totally did see this in the release notes but I read it that you 
 weren't allowing certificates with IP addresses in them, not that you 
 wouldn't allow IP authentication in auth.conf at all.  

 Jul 17 14:52:46 sj2-puppet puppet-master[13998]: Authentication based on 
 IP address is deprecated; please use certname-based rules instead

 I don't feel that it is reasonable to expect that every puppet customer 
 match up their naming scheme to their IP blocks, nor to want to list every 
 possible naming scheme in their authorization list when an IP bitmask will 
 do the job much more simply.

 I don't mind or care about IPs in certificates--I've never seen this, and 
 don't expect to. But disallowing IP-based authentication is going to be 
 very difficult at many sites, and possibly allow things which were never 
 intended. Please reconsider this.


This is actually something of a misleading deprecation warning, I'm afraid. 
The change we plan to make is to distinguish "allow" and "allow_ip", to 
avoid confusing IPs with certnames. So the change you will need to make is 
to explicitly use "allow_ip" if you want to do IP-based authentication. 
However, adding that feature to 2.7.x, though backward compatible, turns 
out to require a fairly significant rework of some of the auth code, which 
is a risk we don't feel is appropriate. So the feature won't be in until 
Puppet 3, at which point it will be required.
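Once "allow_ip" exists, an auth.conf rule might look like the following sketch (planned syntax only; the final form could differ when it ships in Puppet 3):

```
path /catalog
auth yes
allow *.example.com
allow_ip 192.168.100.0/24
```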

That means we're in the awkward position of issuing a warning you can't 
actually fix yet, which is *really* not something we like to do. But it 
seems better to at least give some alert that you'll need to make a change 
in the future than to have it suddenly occur without forewarning. So yes, 
there's definitely a bit of an issue here, but I assure you we don't intend 
to remove IP-based authentication entirely.

Nick Lewis
 

 -- 
 Jo Rhett
 Net Consonance : net philanthropy to improve open source and internet 
 projects.


  





Re: [Puppet Users] Re: Announce: PuppetDB 0.9.0 (first release) is available

2012-05-23 Thread Nick Lewis
On Wed, May 23, 2012 at 7:30 PM, Sean Millichamp s...@bruenor.org wrote:
 On Wed, 2012-05-23 at 06:24 -0700, jcbollinger wrote:

 That understanding of storeconfigs looks right, but I think the
 criticism is misplaced.  It is not Deepak's line of thinking that is
 dangerous, but rather the posited strategy of purging (un)collected
 resources.  Indeed, I rate resource purging as a bit dangerous *any*
 way you do it.  Moreover, the consequences of a storeconfig DB blowing
 up are roughly the same regardless of the DBMS managing it or the
 middleware between it and the Puppetmaster.  I don't see how the
 existence of that scenario makes PuppetDB any better or worse.

 Indeed, it *is* dangerous, but so are many things we do as system
 administrators. The key is in gauging the risk and then choosing the
 right path accordingly.  In my environment I am not always able to know
 the complete history of resources as changes may come from unexpected
 places. It is less than ideal, but it is one aspect of my reality. In
 that situation, the selective use of purging becomes quite key in
 keeping things that need to be cleaned up cleaned up.

 I don't put anything in exported resources with purging that would be
 capable of bringing down a production application, thankfully, but there
 is quite a bit that could quite possibly cause a variety of headaches,
 alerts, and tickets on a massive scale for a while during the
 reconvergence.

 In addition, we are in a transition to PE and the Compliance tool will
 allow me another way of handling that in a more manual admin-review
 approach (to catch resources that get added outside of Puppet's
 knowledge).

 What I really need is some tool by which I can mark exported resources
 as absent instead of purging them from the database when they are no
 longer needed (such as deleting a host).  That would eliminate most, if
 not all, of the intersections of purging and exported resources that I
 have.  Right now I use a Ruby script I found quite a while back to
 delete removed nodes and all of their data.  I'm sure there is a way to
 mark the resources as "ensure => absent" instead, but I've not gone
 digging into the DB structure.

We don't yet have such a tool for PuppetDB, but it's definitely on our
radar. The current `puppet node clean --unexport` reaches directly
into the ActiveRecord storeconfigs database to make ad hoc changes to
resources, which is inappropriate for PuppetDB, which has a strict
catalog lifecycle. We're working to figure out an appropriate way to
provide the same functionality.


 If you cannot afford to wait out a repopulation of some resource, then
 you probably should not risk purging its resource type.  If you do not
 purge, then a storeconfig implosion just leaves your resources
 unmanaged.  If you choose to purge anyway then you need to understand
 that you thereby assume some risk in exchange for convenience;
 mitigating that risk probably requires additional effort elsewhere
 (e.g. DB replication and failover, backup data center, ...).

 Indeed, as I said above, it is about risk management. Deepak's statement
 I had responded to wasn't the first time I had read the "oh, just wait
 for it to repopulate" statement, and I wanted to be certain that wasn't
 actually something that was considered in the design with regards to
 updates, etc. on the stability of the storeconfigs data.

We definitely didn't take safe repopulation as a given. We know many
if not most storeconfigs users will likely suffer at least some
inconvenience or at worst some outages if their data has to be
repopulated; we're not blasé about the issue. We haven't cut any
corners in PuppetDB around safeguarding your data. It's simply a
design ideal we would like to promote. When it's reasonable to design
your exports/collects thusly, it's beneficial for storeconfigs data to
be easily regenerable. After all, that's what Puppet purports to allow
you to do with your infrastructure, and it would be great not to allow
storeconfigs to disrupt that ability. And on that note, where you find
a case that this just isn't possible today, let us know. I'd love for
this to be the norm.

Mostly the reason for mentioning it is because many people hear
"database" and automatically think "oh great, now I have to set up
replication, backups, failover, etc." But before going off and doing
all that work, it's important to ensure this really is data you care
about replicating, backing up, making highly available, etc. Depending
on your needs (for instance, if you're not a storeconfigs user at
all), the answer *may* be no.


 At some point you have to trust tools that have earned that trust
 (either via testing or real world use or both) to do the job that they
 say they are going to do. Puppet has years of earning that trust with
 me. Could something corrupt and destroy the database and cause me a lot
 of trouble? Sure, but that could be said of many tools. That's why we
 have backups, DR systems, etc. even though the 

Re: [Puppet Users] Re: Announce: PuppetDB 0.9.0 (first release) is available

2012-05-22 Thread Nick Lewis
On Tuesday, May 22, 2012 8:26:22 AM UTC-7, Brice Figureau wrote:

 On Mon, 2012-05-21 at 15:39 -0600, Deepak Giridharagopal wrote: 
  On Mon, May 21, 2012 at 2:04 PM, Marc Zampetti 
  marc.zampe...@gmail.com wrote: 

 Why wouldn't a DB-agnostic model be used? 
  
  
   The short answer is performance. To effectively implement things we've 
   got on our roadmap, we need things that (current) MySQL doesn't support: 
   array types are critical for efficiently supporting things like 
   parameter values, recursive query support is critical for fast graph 
   traversal operations, things like INTERSECT are handy for query 
   generation, and we rely on fast joins (MySQL's nested loop joins don't 
   always cut it). It's much easier for us to support databases with these 
   features than those that don't. For fairly divergent database targets, 
   it becomes really hard to get the performance we want while 
   simultaneously keeping our codebase manageable. 
  
  
  I understand the need to not support everything. Having 
  designed a number of systems that require some of the features 
  you say you need, I can say with confidence that most of those 
  issues can be handled without having an RDBMS that has all 
  those advanced features. So I will respectfully disagree that 
  you need features you listed. Yes, you may not be able to use 
  something like ActiveRecord or Hibernate, and have to 
  hand-code your SQL more often, but there are a number of 
  techniques that can be used to at least achieve similar 
  performance characteristics. I think it is a bit dangerous to 
  assume that your user base can easily and quickly switch out 
  their RDBMS systems as easy as this announcement seems to 
  suggest. I'm happy to be wrong if the overall community thinks 
  that is true, but for something that is as core to one's 
  infrastructure as Puppet, making such a big change seems 
  concerning. 
  
  
  We aren't using ActiveRecord or Hibernate, and we are using hand-coded 
  SQL where necessary to wring maximum speed out of the underlying data 
  store. I'm happy to go into much greater detail about why the features 
  I listed are important, but I think that's better suited to puppet-dev 
  than puppet-users. We certainly didn't make this decision cavalierly; 
  it was made after around a month of benchmarking various solutions 
  ranging from traditional databases like PostgreSQL to document stores 
  like MongoDB to KV stores such as Riak to graph databases like Neo4J. 
  For Puppet's particular type of workload, with Puppet's volume of 
  data, with Puppet's required durability and safety requirements...I 
  maintain this was the best choice. 
  
  While I don't doubt that given a large enough amount of time and 
  enough engineers we could get PuppetDB working fast enough on 
  arbitrary backing stores (MySQL included), we have limited time and 
  resources. From a pragmatic standpoint, we felt that supporting a 
  database that was available on all platforms Puppet supports, that 
  costs nothing, that has plenty of modules on the Puppet Forge to help 
  set it up, that has a great reliability record, that meets our 
  performance needs, and that in the worst case has free/cheap hosted 
  offerings (such as Heroku) was a reasonable compromise. 

 I haven't had a look at the code itself, but is the postgresql code 
 isolated in its own module? 

 If yes, then that'd definitely help if someone (not saying I'm 
 volunteering :) wants to port the code to MySQL. 

 On a side note, that'd be terrific Deepak if you would start a thread 
 on the puppet-dev explaining how the postgresql storage has been done to 
 achieve the speed :) 


I'm working on putting together an in-depth look into the technology inside 
PuppetDB, as well as everything we've done to make it fast. That should be 
coming soon.
 

 -- 
 Brice Figureau 
 Follow the latest Puppet Community evolutions on www.planetpuppet.org! 






Re: [Puppet Users] Dashboard not retrieving inventory

2011-07-22 Thread Nick Lewis
On Fri, Jul 22, 2011 at 10:27 PM, Khoury Brazil khoury.bra...@gmail.com wrote:
 Hi All,

 Puppet-dashboard appears to be having some trouble. Under inventory, it says:
 Could not retrieve facts from inventory service: Permission denied -
 certs/dashboard.private_key.pem

 When I run:
 curl -k -H "Accept: yaml" https://puppet:8140/production/facts/host.domain
 I get the expected dump of facts.

 Versions:
 puppet-dashboard is 1.1.0 (using passenger)
 puppet-master is 2.7.1

 Went with an extremely loose config on test:
 Puppet Master:
 auth.conf:
 path /facts
 method find, search
 auth any
 allow *

 puppet.conf:
 # Reporting
 reporturl = http://puppetdashboard.domain/reports/upload
 # Testing, to be changed to DB for prod
 facts_terminus = yaml


 Puppet Dashboard:
 settings.yml:
 # The inventory service allows you to connect to a puppet master to
 retrieve node facts
 enable_inventory_service: true
 # Hostname of the inventory server.
 inventory_server: 'puppet'
 # Port for the inventory server.
 inventory_port: 8140

 Any ideas? I'm stumped at this point. It almost seems like the
 dashboard isn't asking for inventory at all. I've restarted all
 services with no change, on both the master and dashboard hosts.


From the "Permission denied - certs/dashboard.private_key.pem"
message, it looks like the user Dashboard is running as is
unable to read its certs directory. Did you perhaps run the cert-related
rake tasks as root, while Dashboard runs as another user? Make sure
that directory is readable by the appropriate user.

 Thanks,
 Khoury







Re: [Puppet Users] puppetrun/puppet kick

2011-07-14 Thread Nick Lewis

On Tuesday, July 12, 2011 at 4:21 PM, Craig White wrote:

 Can't seem to make it work
 
 puppet 2.6.8 (client/server)
 
 # puppet kick -f ubuntu4.ttinet
 Triggering ubuntu4.ttinet
 Host ubuntu4.ttinet failed: Error 400 on SERVER: 'save ' is not an allowed 
 value for method directive
 ubuntu4.ttinet finished with exit code 2
 Failed: ubuntu4.ttinet
 
 root@ubuntu4:~# cat /etc/puppet/auth.conf
 path /run 
  method save 
  allow *
 
 root@ubuntu4:~# cat /etc/puppet/namespaceauth.conf
 [puppetrunner]
  allow *
 
 root@ubuntu4:~# grep listen /etc/puppet/*
 /etc/puppet/puppet.conf: listen = true
 
It looks like you have a trailing space on your "method save" line, and Puppet 
is taking that to mean the method "save " (note the trailing space). Remove it 
and you should be okay.

See ticket #5010.
http://projects.puppetlabs.com/issues/5010
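Concretely, the corrected /etc/puppet/auth.conf entry is identical except for stripping the trailing whitespace after "save":

```
path /run
method save
allow *
```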

 -- 
 Craig White ~~ craig.wh...@ttiltd.com 
 (mailto:craig.wh...@ttiltd.com)
 1.800.869.6908 ~~~ www.ttiassessments.com 
 (http://www.ttiassessments.com) 
 
 Need help communicating between generations at work to achieve your desired 
 success? Let us help!
 




Re: [Puppet Users] Dashboard resurrecting deleted nodes

2011-07-06 Thread Nick Lewis
On Wed, Jul 6, 2011 at 4:20 AM, Chris Phillips ch...@untrepid.com wrote:
 Hi,
 I was just searching for all systems where selinux is true on Dashboard
 and firstly I got no results, despite there being some (any clues?) but that
 search also seems to have resurrected some nodes I deleted a few weeks ago.
 7 systems instantly appeared under Never reported. I just deleted one, did
 the search again and POW! it's back again.
 Does this sound familiar or should I go open a bug (against 1.1.0)

This is happening because the inventory search will create nodes in
Dashboard corresponding to the nodes retrieved by the search, and the
facts for that node are still present on your master. The ideal
solution would probably be to purge the master of the data for that
node, though someone else will have to speak as to how best to do
that.

On the Dashboard side, you can hide a node rather than deleting it,
which will prevent it from coming back to life this way. Hidden nodes
remain in the system, but are ignored in lists of node statuses,
charts, etc.

 Thanks
 Chris






[Puppet Users] Re: [Puppet-dev] Open Source Team planning meeting summary 2011-06-1

2011-06-15 Thread Nick Lewis

On Wednesday, June 15, 2011 at 6:09 PM, Ian Ward Comfort wrote:

 On 15 Jun 2011, at 9:56 AM, Dan Bode wrote:
  I just wanted to clarify something from this email. Although the best way 
  to get traction for a ticket in the future will be voting in the ticketing 
  system, people are always welcome to highlight particular tickets in order 
  to solicit votes.
 
 In that case, I'll make a shameless plug for #6863 -- allow array literals to 
 be given as function arguments in the parser. I'm pretty sure it's just an 
 oversight that this isn't currently allowed, but it leads to ugly workarounds 
 like this:
 
  $array = ['a','b','c']
  $var = myfunction('first arg', $array)
 
  $empty_array = []
  $other_var = myfunction($empty_array)
 
Hmm, this should already be fixed for 2.7.0. I guess I just missed that ticket 
when fixing up the grammar. I'll verify that and update it if so.
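Assuming the 2.7.0 grammar fix behaves as described, the workaround above 
should no longer be necessary and array literals could be passed directly 
(a sketch; myfunction is the hypothetical function from the example above):

```puppet
$var       = myfunction('first arg', ['a', 'b', 'c'])
$other_var = myfunction([])
```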

 -- 
 Ian Ward Comfort icomf...@stanford.edu (mailto:icomf...@stanford.edu)
 Systems Team Lead, Academic Computing Services, Stanford University
 




[Puppet Users] Open Source team road map

2011-06-13 Thread Nick Lewis
For our first few weeks, the open source team has been working on a
fairly disparate selection of highly-voted and otherwise high-priority
tickets. While these are no doubt important, we'd like to shift our
focus to more cohesive, high-level goals (which will encompass many of
the same highly-voted tickets). To that end, we've produced a
rudimentary road map:

http://projects.puppetlabs.com/projects/puppet/wiki/Road_map

We are going to begin working primarily from this road map,
interleaving other high-priority tickets as they arise. As you can
see, the road map doesn't yet extend as far into the future as we'd
like, but we'll be fleshing it out further in the near future.

We plan to periodically mail out our updated road map. Please feel
free to respond with any feedback, especially if you know of any
tickets we missed that are related to our broad road map goals, or to
suggest general areas of Puppet that could use improvement.

Here's our current road map:

* Unify and properly use autoloader behavior
  - Sync Puppet features (#5454)
  - Can use Applications via pluginsync (#7316)
  - Plugins should not be able to override core functionality (#4916)
  - Load plugins from gems (#7788)
  - Plugins only loaded once (#3741)
  - Unused plugins don't affect Puppet
  - Per environment plugins (#7703, #4656)
  - Plugins accessible from the master (#4409, #4248)
  - Enforce naming conventions for autoloading manifests (#5041, #5043, #5044)

* Types and providers v2
  - Deprecate type-centric API (types must have 1+ provider)
  - Add providers for core types that don't have them
  - Clear separation between model and implementation
  - Parameter validation 100% on agent
  - Action-oriented providers (used easily from Ruby & irb)
  - Lazily evaluate provider suitability (features and commands) (#2384, #6907)

* Graph-related
  - Group package installations together (#3156, #2198)
  - Have both dependency and ordering edges
  - Above/below relationships

* Transient resource states
  - Intermediate states
  - Windows support

We'll be sending out our current iteration backlog on Wednesday, as
part of our usual updates. The only significant change as of today is
that we've replaced the package type v2 entries with items from the
top of our road map.

Nick Lewis




Re: [Puppet Users] Package type: enable/disable repo vs options (#2247 vs #4113)

2011-06-02 Thread Nick Lewis

On Thursday, June 2, 2011 at 10:16 AM, Jacob Helwig wrote:

 We currently have two feature requests to add similar (or at least
 overlapping) functionality to the Package type.
 
 #2247[1] - enablerepo and disablerepo for yum type
 
 #4113[2] - Provide a generic options-style parameter for packages.
 
 It seems like having #4113 would remove the need for having #2247, but I
 wanted to get some wider opinions to make sure I wasn't ignoring some
 use-case that would make this not the case.
 
 Personally, I think we should move forward on #4113 instead of #2247,
 since #4113 seems like the more general solution, and isn't tied
 directly to the yum provider.
 
 #2247 does currently have some code submitted for it, however it
 requires a signed CLA before we can accept it. While no code has been
 written for #4113 yet, it doesn't look like it would actually be that
 much work to do.
 
 Thoughts? Opinions? Comments?
 
It looks like the crux of this problem is that many, many providers add their 
own, fairly unique, capabilities. We then try to model all of these 
capabilities in the type, and end up with a package type that has ~15 
parameters, many of which are ignored on almost all providers, and no 
properties.

And yet we have no real ability to add provider-specific attributes, aside from 
adding them to the type, with an associated feature, and declaring our provider 
as the only one that supports that feature. So my generic proposal is that we 
add some better way to do that. To keep the definition of a resource consistent 
across providers, this should only allow additional parameters (data) to be 
specified, and not properties.

I have a few ideas for this:

1) Add an 'options' or 'data' or similar metaparameter which accepts a hash.

This would basically be a place to add arbitrary data accessible to the 
provider. Thus, for the enablerepo example, it would just be a key/value in the 
options hash. Any provider is free to use or ignore it as desired. A big 
problem with this is there's no real validation for which keys are allowed, or 
what they must look like, which leads us to:

2) As 1, but with an ability for a provider to specify the options acceptable.

In this case, a provider would have some method for declaring a legal option, 
and its validation and/or munging. But in this case, what's the difference 
between a parameter and an option? Apparently only where/how we declare and 
specify them. Although, there may be some benefit to distinguishing generic 
type parameters from provider-specific options.

3) As 2, but remove the concept of parameters.

This is one possible way to reconcile the difference between parameters and 
options. But is there really an advantage to wrapping all of our data which is 
essentially parameters in a hash? Maybe, for distinguishing parameters from 
properties, but probably not.

4) As 2, but instead using something like newparam on provider.

This is similar to the previous idea, in that it unifies options and 
parameters, but in the other direction. In addition to specifying generic 
type parameters, also add the ability to specify provider-specific parameters. 
This has the advantage of not requiring any changes to existing manifests using 
provider parameters. It has the disadvantage that we can't really validate 
provider parameters on the master (though we've talked about removing 
validation on the master anyway).
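As a rough illustration only (none of this syntax exists today; it is purely 
a sketch of the proposals), ideas 1 and 4 might look like this from a manifest:

```puppet
# Idea 1: a generic 'options' metaparameter holding arbitrary
# provider-specific data (hypothetical syntax):
package { 'httpd':
  ensure  => present,
  options => { 'enablerepo' => 'updates-testing' },
}

# Idea 4: provider-specific parameters declared on the provider and
# then used like ordinary type parameters (also hypothetical):
package { 'nginx':
  ensure     => present,
  provider   => yum,
  enablerepo => 'updates-testing',
}
```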

Since I can't even decide which of my own four suggestions I prefer, please 
poke holes in as many of them as you can to ease my mental burden. :)

 [1] http://projects.puppetlabs.com/issues/2247
 [2] http://projects.puppetlabs.com/issues/4113
 
 -- 
 Jacob Helwig




Re: [Puppet Users] Removing links

2011-05-31 Thread Nick Lewis

On Tuesday, May 31, 2011 at 2:58 AM, John Kennedy wrote:

 I have a group of web servers being load balanced. I have 4 types of servers, 
 all built from the same image. 
 When I build the image I forgot to clear out one of the sym links from 
 sites-enabled to sites-available. This is causing problems with the web 
 servers. I have tried to have puppet remove the link but have had little 
 success. I have tried the following:
 file { '/opt/nginx/sites-enabled/site file': ensure => absent }
 This will remove a file if it is there but not this link.
 What am I missing? I have googled to get the above which I thought would 
 remove the link.
 Thanks,
 John
 
Which Puppet version are you using? This looks similar to #6856 which was fixed 
in 2.6.8. It could also be #4932.
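Assuming one of those fixes is in place, the resource itself should work; for 
reference, a cleaned-up form (using the path exactly as written in the 
original message) would be:

```puppet
file { '/opt/nginx/sites-enabled/site file':
  ensure => absent,
}
```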

 -- 
 John Kennedy
 




Re: [Puppet Users] User's Home Folder is not Being created but the user is there.

2011-05-31 Thread Nick Lewis

On Tuesday, May 31, 2011 at 10:50 AM, vella1tj wrote:

 user { 'trevor':
   uid     => 500,
   groups  => 'root',
   comment => 'this user was created by Mr. Puppet',
   ensure  => present,
   home    => '/home/trevor',
   shell   => 'bin/bash',
 }
 
 I created this to create a user with Puppet; it was an exercise
 given by one of my co-workers to help me learn Puppet quicker. Once
 it's applied it works and I can log in as the user, but the home
 directory is not created. This is using CentOS in VMware Fusion.
 
You'll need to set the 'managehome' parameter to tell Puppet to actually create 
the home directory.
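A sketch of the corrected resource (note managehome, and the leading slash on 
shell, which also appears to be missing in the original):

```puppet
user { 'trevor':
  ensure     => present,
  uid        => 500,
  groups     => 'root',
  comment    => 'this user was created by Mr. Puppet',
  home       => '/home/trevor',
  managehome => true,
  shell      => '/bin/bash',
}
```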




[Puppet Users] ANNOUNCE: Puppet Dashboard 1.1.0 - Release Candidate 1 available!

2011-03-16 Thread Nick Lewis
This release addresses a large number of issues and adds lots of new
functionality, including:

Inventory Service Lookup

- The node view page will now retrieve and display the node's facts from the
inventory service.
- There is a Custom Query page which will search the inventory service for
nodes meeting particular conditions.

Preliminary documentation for this feature will be available at:
https://github.com/puppetlabs/puppet-docs/blob/master/source/guides/inventory_service.markdown

Finalized documentation will be available at release time on the main
documentation site:
http://docs.puppetlabs.com

Settings

- Many settings may now be specified in config/settings.yml. Copy the
config/settings.yml.example (which provides fallback defaults) to get started.
- Changing a setting will currently require a server restart to take effect.

Inspect Report Handling

- Dashboard can now consume and display inspect reports.

Filebucket integration

- Dashboard can now display file contents and diffs from a specified
Puppet filebucket.

Lots of UI and speed improvements

Better support for reports

- Now supports 2.6 reports and inspect reports

Preliminary support for user-made plugins

Improved Class/Group/Parameter dependency reporting and handling

Log rotation

 This release is available for download at:
  http://puppetlabs.com/downloads/dashboard/puppet-dashboard-1.1.0rc1.tar.gz

 See the Verifying Puppet Download section at:
  
http://projects.puppetlabs.com/projects/puppet/wiki/Downloading_Puppet#Verifying+Puppet+Downloads

 Please report feedback via the Puppet Labs Redmine site, using an
 affected version of 1.1.0rc1:
  http://projects.puppetlabs.com/projects/dashboard/

v1.1.0rc1
=
1fcfc01 (#6736) Provide Mutex, avoid an error.
95f97fb maint: Move inventory section lower on the node page
8629962 (#4403) Do timezone arithmetic outside of the DB in the Status model
614655c Remove dead code from Status model
849f2de Validate the user supplied daily_run_history_length
118962b (#6656) Inventory service is no longer experimental.
90e0624 (#6601) Inventory search uses the new inventory URL
fb55499 (#5711) Change license from GPLv3 to GPLv2
68b335e (#5234) Source of silk icons attributed, per author's license
d3d1528 Maint: Moved logic for identifying inspect reports into a callback.
c2fe255 Maint: removed bogus comments from _report.html.haml
81b8a04 Maint: Moved elements of the report show view into callbacks.
2b91838 Maint: Moved elements of the node show view into callbacks.
cc95431 Maint: Forbid uninstalled plugins from adding themselves to hooks.
169d275 Maint: Add plug-in install and uninstall rake tasks
d4d0b00 Maint: removed db/schema.rb
5f6614d Maint: Removed some private methods in the report model that
are part of baseline functionality.
db663a5 Maint: remove code that belongs in the baseline module.
5be1f0f maint: Added log dir to version control
93857f0 Maint: Add puppet plugins to .gitignore
1197e8a Bug fix: renamed each_hook and find_first_hook to *_callback
cbfde3d Remove some forgotten baseline code
2b4f9eb Add some basic hooks for use by future Dashboard plug-ins.
c9ff13e Add a registry for creating hooks and callbacks.
a40e6c9 Oops: Remove report baseline functionality
fd7f799 Rename baseline-diff-report CSS classes and IDs to be expandable-list
161e0da (#6090) Improved auto-selection of specific baseline.
035aa17 (#6072) Moved baseline inspection link underneath Recent Inspections
613a465 (#6095) Render proper error messages when diffing against a
baseline that can't be found
ea2368f (#6069) Fixed unique ids in the report group diff view.
3426763 maint: Use new factory_girl syntax to improve a test
1862966 maint: Refresh the vendored gem specifications
79a23c9 maint: replace object_daddy with factory_girl
b6b17e5 maint: Fix a case where the alphabetically first baselines may
not appear
f5d0bbe Maint: Moved colorbox.css and image files to be compatible
with production environment
989fb4a (#5869) Extract baseline selector into a partial
f666476 (#5869) Add new baseline selector to the node group page
5e0d448 (#5869) Rework the baseline selector for report show page
6fef8e7 (#5869) Add a /reports/baselines action to retrieve baselines
5e7f8cb maint: add a view test to motivate reverting diff expand_all
f5ac259 maint: Added combobox widget, to replace autocomplete plugin
9da052f maint: upgraded jquery-ui to 1.8.9
80c182c Maint: Add JQuery UI animation to expand/collapse widgets.
efefe3d (#6024) Show filebucket popup on diff screen, too
1c3c134 (#6024) Click md5s to popup file bucket contents on reports
f1bed8f maint: Privatize string helper
2112ba1 (#5865) Further improvements and bug fixes to the search
inspect reports page
d0cda86 Revert Maint: Removed show_expand_all_link variable
57589a4 (#5785) Removed some redundancy from report view.
353998f Maint: Removed show_expand_all_link variable
de8ee46 (#5867) Add ability to diff a node_group against a single baseline
7ca4d59 (#5867) Only 

[Puppet Users] RFC: Database-backed inventory service plan

2011-02-23 Thread Nick Lewis
Our current plan for the inventory service is to provide active_record
termini for the facts and inventory indirections. This is to support
fast look-up of facts, and search of nodes based on their facts. However,
there are already tables for facts, used for storeconfigs, along with an
active_record terminus for facts. We want to avoid unnecessarily duplicating
this behavior, by reusing the existing tables and terminus. This would
result in the same fact data being used by both the inventory service and
storeconfigs.

The only potential concern we can see with this is users wanting different
fact expiration policies for inventory service and storeconfigs. Given the
usage scenarios for storeconfigs that we are aware of, this seems unlikely
(it sounds like storeconfig fact data is mostly being used as a stand-in for
an inventory service). This proposal would have no other effect on
storeconfigs.

Please share any other comments or concerns you may have related to this
proposal, particularly if it would interfere with your current use of
storeconfigs. Thanks.
