information on the specifics of the
release can be found in the official release notes:
https://docs.puppetlabs.com/puppetdb/3.2/release_notes.html
Contributors
---
Andrew Roetker, Ken Barber, Nick Fagerlund, Rob Browning, Russell Mull,
Ryan Senior, Tim Skirvin, Wayne Warren and Wyatt Alt
> I've been trying to use the catalogs stored in the PuppetDB to make the
> puppet-catalog-diff tool faster (instead of recompiling).
>
> However, the catalogs are munged before being stored in the PuppetDB, so
> this does not work.
>
> Is there a way to transform PuppetDB catalogs back to standard
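One hedged approach (a sketch only; the `resources`/`type`/`title`/`parameters` field names follow PuppetDB's documented catalog wire format, but check the format for your version) is to re-nest the flat resource list PuppetDB returns into a resource-keyed hash closer to what the compiler emits:

```ruby
require 'json'

# Re-nest a PuppetDB catalog's flat resource array into a
# "Type[title]" => parameters hash, closer to a compiled catalog's
# shape. Field names follow the PuppetDB catalog wire format.
def renest_resources(pdb_catalog_json)
  catalog = JSON.parse(pdb_catalog_json)
  catalog.fetch('resources').each_with_object({}) do |r, acc|
    acc["#{r['type']}[#{r['title']}]"] = r['parameters'] || {}
  end
end
```

This only undoes the flattening; any munging the terminus performed (tag normalisation, dropped fields) would still need reversing separately.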
that if you have any questions or
trouble migrating to PostgreSQL, the puppet-users mailing list and the
#puppet IRC channel are watched by a number of us in the PuppetDB team
(not to mention, by other avid community users who are also helpful),
so we can help where necessary with any problems.
Reg
> While we are leaning toward a config-file driven approach, we would be
> interested in hearing of any specific use cases you may know of where this
> may be insufficient. We would specifically be interested in any use cases
> which suggest that some affordance in the design should be made to all
> Can anyone help me get started working with Puppet?
>
> I have started working with PuppetDB; I need to connect to PuppetDB and access
> some fields.
>
> I have downloaded the Puppet source code from GitHub; they mentioned
> some .pem files are needed to run the junit.
I'm not sure where we mention junit :-)
> Ah ok, yeah I was talking mainly about module testing. But yeah, the general
> gist is if you want to run beaker tests for free and in a public CI, Wercker
> + Docker seems to work :)
Yeah absolutely, it's an exciting way of doing tests really. There are
some exceptions of course - for example if
> I know PuppetLabs themselves use their own Jenkins server with Virtualbox on
> it, but for those who want to get beaker stuff up publicly easily and (for
> now, free) wercker seems to work pretty well! :)
Actually, we generally use vSphere now (I presume you mean just module
testing), just FYI.
> Yes, I'm using 'lein deps' to download the deps locally, which is how
> I handle it for PuppetDB as well.
>
> The issue I'm seeing is that when i run 'lein uberjar' the resulting
> jar doesn't contain all the classes needed to run; jetty and
> trapperkeeper are not included. When I attempt to sta
> The only problem I see with deprecating active_record storeconfigs is:
> How are you going to use exported resources in puppet apply
> environments, without having to do all the SSL dance?
>
> https://groups.google.com/forum/#!msg/puppet-users/L4CAHh3eYag/To9nHlAvA34J
Well, we could simplify the
> I'm wondering if anyone has a relatively straightforward way to allow a
> group of Puppet Masters to access a shared data table in PuppetDB to which
> they can read and write named JSON objects.
>
> No other hosts should be able to access the data.
Nothing like this has been provided formally to
> Instead of continuing on the old thread "A question about numbers and
> representation", I decided to open a new thread about BigDecimal to see if
> we can come to closure on that separately.
>
> Digging a bit more into Ruby, and how it handles floating point reveals that
> there is in fact no au
> Thanks, Ken. Could you devote a few words to how PuppetDB chooses which of
> those alternative columns to use for any particular value, and how it
> afterward tracks which one has been used?
So PuppetDB - in particular fact-contents, and the way it stores leaf
values - makes a decision using a ver
> I would think if we had transport encoding issues (like msgpack not
> supporting a larger type natively) we could decode before we store I
> guess. That's an alternative. It means traversing the tree to find
> these cases and modifying them on the float perhaps. Things like
> zipper in clojure make
>> So right now, we have alternating native postgresql columns for the
>> bare types: text, biginteger, boolean, double precision. This provides
>> us with the ability to use the most optimal index for the type, and of
>> course avoid storing any more than we need to. As I mentioned at the
>> top o
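The alternating-column scheme described above can be sketched illustratively in Ruby. The column names below are my own assumptions for illustration, not PuppetDB's actual schema or code:

```ruby
# Illustrative sketch (column names are hypothetical): pick the native
# PostgreSQL column best suited to a leaf fact value, mirroring the
# alternating text / biginteger / boolean / double precision scheme.
def storage_column(value)
  case value
  when TrueClass, FalseClass then 'value_boolean'   # boolean
  when Integer               then 'value_integer'   # biginteger
  when Float                 then 'value_float'     # double precision
  when String                then 'value_string'    # text
  else                            'value_json'      # structured leftovers
  end
end
```

The payoff of dispatching like this is that each column can carry the most optimal index for its type, and comparisons like `<` behave numerically rather than lexically.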
>> > 2) Why would allowing one or both of the Bigs prevent Number from being
>> > allowed as a serializable type?
>> >
>> Not sure I said that. The problem is that if something is potentially
>> Big... then a database must be prepared to deal with it and it has a
>> high cost.
>
>
>
> Every Puppet
> TL;DR - I want to specify the max values of integers and floats in the
> puppet language for a number of reasons. Skip the background part
> to get to "Questions and Proposal" if you are already familiar with
> serialization formats, and issues regarding numeric representation.
TL;DR: from a Pup
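The motivation for capping numeric sizes can be seen in how IEEE 754 doubles, the only number type many serialization formats and parsers offer, lose integer precision:

```ruby
# Integers above 2**53 cannot be represented exactly as IEEE 754
# doubles, which is what JSON parsers in many languages hand back.
big = 2**53 + 1            # 9007199254740993
round_tripped = big.to_f.to_i
puts big == round_tripped  # false: the double rounds to 9007199254740992
```

Any pipeline that passes catalog or fact values through a double-backed representation silently corrupts integers beyond that bound, which is one concrete argument for specifying max values in the language.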
> As a bit of a followup to this discussion I've created this pull request:
> https://github.com/puppetlabs/facter/pull/777
>
> When fixing support for structured facts I noticed that the current data
> type bugginess in facter makes it almost unusable with PuppetDB 2.2. As you
> can't use the < or
more detailed information and upgrade advice, consult the detailed
release notes here:
https://docs.puppetlabs.com/puppetdb/2.2/release_notes.html
Contributors
Brian Cain, Eric Timmerman, Justin Holguin, Ken Barber, Nick
Fagerlund, Ryan Senior and Wyatt Alt.
Changelog
-
Brian
>> Essentially what I want to be able to do is declare the intent of a
>> relation between any two resources or 'bags' of resource types in the
>> catalog and have that relation taken into account when the resource is
>> realised, without causing realisation at that point.
>>
>
> Completely agree -
y queries. Does external access to
> puppetdb need to be granted somewhere? I saw there is a whitelist but didn't
> have much luck.
>
> [root@master pe-httpd]# curl -X GET http://localhost:8080/v3/version
> {
> "version" : "1.5.1-pe"
>
> }
>
>
elps me understand the full results.
ken.
On Wed, Jul 16, 2014 at 3:21 PM, Alex Wacker wrote:
> Also not seeing anything related to the failure to connect in the logs
>
>
> On Wednesday, July 16, 2014 9:51:48 AM UTC-4, Ken Barber wrote:
>>
>> > I am able to get a
> I am able to get a response out of the standard puppet API (not puppetdb)
> via curl however puppetdb only gives such:
>
> Something such as the puppet master API (while not exactly the
> puppetdb api) will at least respond to me
I can't help but notice you are connecting from a windows bo
g conversion code for Schema 0.2.1
Much of the code for converting user-provided config to the internal
types (e.g.
"10" to Joda time 10 Seconds, etc.) is no longer necessary with the new
coerce features of Schema. This commit switches to the new version and
makes the
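The kind of coercion being replaced can be illustrated in Ruby (a sketch of the behaviour described, not the actual Clojure Schema code; the `"10s"` suffix handling is my own embellishment):

```ruby
# Coerce user-supplied config values like "10" into an integer number
# of seconds, mirroring the '"10" to Joda time 10 Seconds' example.
def coerce_seconds(value)
  case value
  when Integer       then value                       # already numeric
  when /\A(\d+)s?\z/ then Regexp.last_match(1).to_i   # "10" or "10s"
  else raise ArgumentError, "cannot coerce #{value.inspect} to seconds"
  end
end
```

The appeal of Schema's built-in coercers is precisely that this kind of hand-rolled conversion table disappears in favour of declaring the target type once.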
Hello,
For those who don't know me, I'm a developer for Puppet Labs primarily
on the PuppetDB platform. I'm seeking early feedback from you all
on our technical design ideas for adding Structured Fact
storage & querying to PuppetDB.
The last time we floated our ideas around environment sup
at 9:16 PM, Ken Barber wrote:
> ** Final Release **
>
> PuppetDB 2.0.0 final - May 6, 2014.
>
> PuppetDB 2.0.0 Downloads
>
>
> Available in native package format in the release repositories at:
> http://yum.puppetlabs.com and http://apt.puppetlabs
the docs site
* (PDB-602) Updated acceptance tests to use a proper release of leiningen
* (DOCUMENT-6) Update config page for PuppetDB module's improved
settings behavior
PuppetDB 2.0.0 Contributors
---
Alice Nodelman, Aric Gardner, Chris Price, Daniele Sluijters, D
>> So for now our status means trying to do this in the language without
>> an actual change to Puppet is becoming hard. This is entirely
>> possible, but we'll have to ship with environment support without
>> constraint capability today most probably.
>>
>> The only other 'quick and dirty' option
I can think of is to do this
back in the terminus configuration again, which some people are
clearly not fans of.
Any other ideas from those watching at home?
ken.
On Fri, Apr 11, 2014 at 12:44 AM, Ken Barber wrote:
> The 3x/current parser is very picky about what it allows in the query. The only
> chance of doing something special is to reserve some particular expressions
> that would otherwise be interpreted as a regular query - i.e. checking
> equality on a virtual parameter name or something like that. This is b
>> Hmm. Lots of things are possible, just need to avoid collision with
>> the parameter naming.
>>
>> Myresource <<| .environment == $::environment |>> # dalen's suggestion
>
>
> Nah, that goes down the path of using different syntax and even terminals in
> queries.
>
>
>> Myresource
vironment? |>> #
short-hand for matching the same environment
ken.
On Wed, Apr 2, 2014 at 4:14 PM, Erik Dalén wrote:
>
> On 2 April 2014 16:40, Ken Barber wrote:
>>
>> > I quite like the idea of allowing people to restrict collection based
> I quite like the idea of allowing people to restrict collection based
> on environment. That requires a slight tweak to the puppetdb terminus
> code, but I don't think it'll be too bad. Erik is correct, though,
> that we can't really use "environment" as the search term there
> because there are
>> Isn't that the opposite of what you've been asking for as the default,
>> do you mean "environment = $::environment" here or have you changed
>> your mind about the default?
>>
>
> I think the default when you leave out a parameter from a query should be to
> not match on that parameter, like it
>> >> This seems a bit backwards to me, for all other parts of the query you
>> >> just leave it out if you don't want to match on it. There's no need for an
>> >> explicit tags=='*' if you want to match on all tags for example. So I
>> >> don't
>> >> see why environment matching would work th
> I... kinda like that suggestion. I would keep current behaviour intact, so
> collection would work 'as expected though weirdly' and not break current
> manifests. People who are up to date on this can explicitly select an
> environment to collect.
>
> I also think that this approach works better
>> Upon reflection, I think it would be wise to control which resources are
>> collected strictly via search expressions. I disfavor a configuration
>> setting affecting that, because if there were one then it would be likely
>> that different modules would be developed with different assumptions
> Coarser grained too, perhaps? That is, for the case where puppetdb is
> configured with collection_environments = 'same', does it not make sense to
> support, say,
>
> Nagios_host<<| environment == * |>>
>
> to collect resources from all environments? Or maybe the smoothest path
> would be
>> > I'm all for only collection from the environment you're from but there
>> > needs
>> > to be a way to override this. No matter the environment I still want all
>> > my
>> > machines monitored by my Nagios instance which happens to be running on
>> > the
>> > production environment.
>>
>> I con
>> I'm all for only collection from the environment you're from but there needs
>> to be a way to override this. No matter the environment I still want all my
>> machines monitored by my Nagios instance which happens to be running on the
>> production environment.
>
> I concur with the sentiments a
>> This is the world I see, it won't affect everyone though and
>> theoretically with 1 hour check-ins it will be solved next run. My
>> fear is more around those that don't run puppet as often.
>
> Oh. I was under the impression most people run Puppet at least every half
> hour, if not even more o
> I'm wondering if it will be possible to disable this behaviour on resource
> collections in the terminus?
> For us puppet environments are mapped to git branches, and actual
> environments (like testing and prod) have different Puppet CAs and PuppetDBs
> (it is really best to not mix them anyway)
> The one issue I see with that approach is what happens when your monitoring
> system, *choke* nagios *choke* is dependent on said exported resources.
> Wiping the database clean isn't really an option in that case or you'd need
> to temporarily disable reloading your monitoring until everyone's c
o keep test stuff out of production only, not
> anything more complicated.
>
>
> On Fri, Mar 28, 2014 at 6:52 PM, Ken Barber wrote:
>>
>> > We've been simply tagging resources with env_${environment} both on
>> > export
>> > and collect.
>>
&g
> We've been simply tagging resources with env_${environment} both on export
> and collect.
Which sounds like a reasonable work-around. In this case, only option 2
would guarantee uninterrupted service for you, correct?
ken.
--
You received this message because you are subscribed to the Google Gr
Thanks for your perspective John, much appreciated.
On Fri, Mar 28, 2014 at 7:15 PM, John Bollinger
wrote:
>
>
> On Friday, March 28, 2014 12:46:41 PM UTC-5, Ken Barber wrote:
>>
>> Hey all,
>>
>> TL;DR: We're adding support for environments to PuppetDB b
Hey all,
TL;DR: We're adding support for environments to PuppetDB but have a
small migration hassle we wanted some community opinion on. If you're
interested in PuppetDB and environments read on.
So we're looking at adding first-class support to PuppetDB for
environments. In the past we would happ
xamples to use v3 API
* Added examples to documentation for latest-report? and file
PuppetDB 1.6.3 Contributors
---
Aric Gardner, Chris Price, Deepak Giridharagopal, Ken Barber, Matthaus
Owens, Moses Mendoza, Rob Braden, Ryan McKern, Ryan Senior
PuppetDB 1.6.
in (typically represented as a sequence of certs in a
single file) for trust.
PuppetDB 1.6.0 Contributors
---
Chris Price, Deepak Giridharagopal, James Sweeney, Justin Holguin, Ken
Barber, Kushal Pisavadia, Matthaus Owens, Melissa Stone, Moses
Mendoza, Nick Fagerlund,
> I read through the comprehensive release notes trying to find if there are
> any specific changes/additions that affects the API that I need to also
> incorporate in the Java client (new endpoints or functionality). Lot's of
> great additions for sure but am I correct in assuming that the actual
> I've found that when putting stuff in modules that use some shared code like
> puppetdbquery, you either have to do a pluginsync on the master first, or do
> some hacks with the ruby load path to load the code directly out of the
> modulepath. That could be a bit annoying for stuff like naginator
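The load-path hack being described might look roughly like this (a sketch only; the modulepath location is an example, and `add_module_libs` is a hypothetical helper, not a Puppet API):

```ruby
# Sketch of the work-around described above: push each module's lib/
# directory onto Ruby's load path so shared code (e.g. puppetdbquery)
# is requirable on the master without waiting for pluginsync.
def add_module_libs(modulepath)
  Dir.glob(File.join(modulepath, '*', 'lib')).sort.each do |dir|
    $LOAD_PATH.unshift(dir) unless $LOAD_PATH.include?(dir)
  end
end

# e.g. add_module_libs('/etc/puppet/modules')
```

As the post notes, this is fragile: it reveals the on-disk layout as an API, which is exactly why pluginsync exists.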
inst both
PostgreSQL and HSQLDB. We also run our full acceptance test suite on
incoming pull requests.
PuppetDB 1.6.0 Contributors
---
Chris Price, Deepak Giridharagopal, James Sweeney, Justin Holguin, Ken
Barber, Kushal Pisavadia, Matthaus Owens, Melissa Stone, Moses
Mendoz
soft_write_failure to puppetdb.conf (Garrett Honeycutt)
* Add switch to configure database SSL connection (Stefan Dietrich)
* (GH-91) Update to use rspec-system-puppet 2.x (Ken Barber)
* (GH-93) Switch to using puppetlabs-postgresql 3.x (Ken Barber)
* Fix copyright and project notice (Ken Barber)
* Adjust
-
* (GH-73) Switch to puppetlabs/inifile from cprice/inifile (Ken Barber)
* Make database_password an optional parameter (Nick Lewis)
* add archlinux support (Niels Abspoel)
* Added puppetdb service control (Akos Hencz)
Hmm ... thanks for letting us know.
My humblest apologies, we'll get the updates out for those distro
releases ASAP. As Daniele mentioned, on apt.puppetlabs.com it seems
only lucid is available now; the other releases are missing.
ken.
On Tue, Oct 1, 2013 at 9:56 AM, Daniele Sluijters
wrote:
>
board on Trello:
http://links.puppetlabs.com/puppetdb-trello
## PuppetDB 1.4.0 Release Notes ##
Notable features and improvements:
* (#21732) Allow SSL configuration based on Puppet PEM files (Chris
Price & Ken Ba
I've done it, but I worked for Alfresco in their IT department at the
time. You should try contacting Alfresco themselves - they have
modules internally for managing Alfresco that they might be able to
share. In an ideal world if they can host them on the forge, even
better.
ken.
On Wed, Jun 19,
, including tests and
documentation (Christian Berg)
* the new settings report_ttl, node_ttl and node_purge_ttl were added
but they are not working, this fixes it (fsalum)
* Also fix gc_interval (Ken Barber)
* Support for remote puppetdb (Filip Hrbek)
* Added support for Java VM options (Karel B
ing different. Don't know how other mail
clients are working though :-).
ken.
On Fri, May 10, 2013 at 2:20 PM, Ken Barber wrote:
> Yeah, I don't think this was intentional Allan - looks like google groups
> threading is broken somehow in this case and mixing these two threads
>
Yeah, I don't think this was intentional Allan - looks like google groups
threading is broken somehow in this case and mixing these two threads
together. In my email client however, they are separate threads.
ken.
On Fri, May 10, 2013 at 1:59 PM, Allan Yung wrote:
> I'm glad you were able to f
r Ubuntu (Kamil Szymanski)
* Allow setting connection for new role (Kamil Szymanski)
* Fix pg_hba_rule for postgres local access (Kamil Szymanski)
* Fix versions for travis-ci (Ken Barber)
* Add replication support (Jordi Boggiano)
* Cleaned up and added unit tests (Ken Barber)
* Generalization to pr
A new release of the puppetlabs/puppetdb module is now available on the Forge:
http://forge.puppetlabs.com/puppetlabs/puppetdb/1.2.1
This is a bugfix release that solves the PuppetDB startup exception:
java.lang.AssertionError: Assert failed: (string? s)
This was due to the default `node-ttl`
A new release of the puppetlabs/puppetdb module is now available on the Forge:
http://forge.puppetlabs.com/puppetlabs/puppetdb/1.2.0
This release is primarily about providing full configuration file
support in the module for PuppetDB 1.2.0. (The alignment of version is
a coincidence I assure you
A new release of the puppetlabs/puppetdb module is now available on the Forge:
http://forge.puppetlabs.com/puppetlabs/puppetdb/1.1.5
This is a minor bugfix release.
Changelog
2013-02-13 - Karel Brezina
* Fix database creation so database_username, database_password and
database_name are c
hereby the
`include` directive in `postgresql.conf` was not compatible. As a
work-around we have added checks in our code to make sure systems
running PostgreSQL 8.1 or older do not have this directive added.
Detailed Changes
2013-01-21 - Ken Barber
* Only install `include` directive and inc
So for anyone running RHEL or CentOS 5, we've found a bug - but
already have a fix for you all in master:
https://github.com/puppetlabs/puppet-postgresql/issues/130
We'll do a follow up minor release soon to cover this. Thanks!
ken.
On Wed, Feb 20, 2013 at 6:02 PM, Ken Barber wrot
The 'catalogue' method is the running catalogue within rspec-puppet.
It would be pretty huge and hard to follow, as it's obviously one very
large Ruby object - but a pretty_inspect (would need require 'pp') or
inspect might be a good place to start analyzing it.
On Thu, Feb 21, 2013 at 5:17 PM, Maar
ADME.md to conform with best practices template
2013-01-09 - Adrien Thebo
* Update postgresql_default_version to 9.1 for Debian 7.0
2013-01-28 - Karel Brezina
* Add support for tablespaces
2013-01-16 - Chris Price & Karel Brezina
* Provide support for an 'include' config file
Try a defined resource that wraps an exec. While this is nowhere near
complete, something along the lines of the example below might be a good
place to start.
define mycustompackage {
  exec { "mycustompackages-${name}":
    command => "rpm -ivh --dbpath /yourdbpath/ ${name}",
    unless  => "rpm -q --dbpath /yourdbpath/ ${name}",
  }
}
This is certainly how we handle the firewall/iptables case, using
properties and a late flush:
https://github.com/puppetlabs/puppetlabs-firewall/blob/master/lib/puppet/provider/firewall/iptables.rb#L94-L102
On Tue, Feb 12, 2013 at 4:02 PM, Gavin Williams wrote:
> Any opinions out there???
>
> Ch
About bloody time :-).
On Tue, Feb 5, 2013 at 1:09 AM, Jeff McCune wrote:
> Hi everyone,
>
> I'd like to welcome Adrien Thebo to the platform team. His first day is
> today and he's diving right into reviewing community pull requests. Adrien
> will be working with the platform team and him and
(puppet-users is more appropriate for this question btw, next time use
that mailing list instead of this one).
'Connection refused' is a generic TCP message telling us that the port
is not open on the host you specified. This usually means it's not
running, or not listening on the host port combina
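A minimal Ruby sketch of probing whether a port is accepting connections, which surfaces exactly the "connection refused" condition described (the helper name is my own):

```ruby
require 'socket'

# Returns true if something is accepting TCP connections on host:port.
# Errno::ECONNREFUSED is the kernel's way of saying nothing is
# listening there -- the generic condition described above.
def port_open?(host, port, timeout = 1)
  Socket.tcp(host, port, connect_timeout: timeout) { true }
rescue Errno::ECONNREFUSED, Errno::ETIMEDOUT, SocketError
  false
end
```

Running this against the PuppetDB host and port before reaching for curl quickly separates "service not running/listening" from SSL or whitelist problems.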
t; Thanks for the new release.
>
>
>
> 2013/1/25 Ken Barber
>>
>> For those trying PuppetDB 1.1.0 in their various labs today, just a
>> warning to make sure you upgrade the installed version of
>> puppetdb-terminus as well as puppetdb to the same 1.1.0 version
.com/puppetdb-trello
>
> PuppetDB 1.1.0 Release Notes
> ==
> Many thanks to the following people who contributed patches to this release:
>
> Chris Price
> Deepak Giridharagopal
> Jeff Blaine
> Ken Barber
> Kushal Pisavadia
> Matthaus Litteken
Hey Eric,
> ### YARD API Documentation
>
> To go along with the improved usability of Puppet as a library, we've added
> [YARD documentation](http://yardoc.org) through selected parts of the
> codebase. YARD generates browsable code documentation based on in-line
> comments. This is a first pa
> You do touch on something that I've been wondering about. We somehow have
> ended
> up with a 3.x branch, which would mean that master is really the 4.x
> branch. I don't see
> any point in us trying to work on 4.x when there isn't even a 3.x
> release yet. Would anyone
> be opposed to us gettin
>> If you're going to bundle it, I'd rather us not fork it. Keep it in
>> its own directory, and allow distros to rm -rf it, and add mini-tar
>> as
>> a dependency.
>>
>> http://fedoraproject.org/wiki/Packaging:No_Bundled_Libraries pretty
>> much sums up my thoughts on bundled libraries in general
Hi puppet-dev,
TL;DR - I'm thinking about shipping minitar with puppet to make module
install/build work consistently (and to ease development) (see:
http://projects.puppetlabs.com/issues/15841) and wanted some feedback
--
So in case you don't already know - the puppet module face utilises
s
+3 you're my new personal hero Brice :-).
On Tue, Jul 3, 2012 at 4:10 PM, James Turnbull wrote:
> Nigel Kersten wrote:
>>> Those patches are not ready for general consumption and upstream merging
>>> (that's why I didn't publish a pull request), but I'd appreciate any
>>> feedback and tests on othe
> The best part about having the documentation embedded in the types and
> providers is that it's expressed in a Turing-complete language, so we have
> to actually LOAD THE CODE to extract it.
Side issue but related: This also affects building packages for users
for the forge if we want least effo
Hey all,
Sorry - last newbie question for the night.
I wanted to see what peoples thoughts were around when an acceptance
test is required, there was a debate on a ticket recently and two
people told me it wasn't necessary to add acceptance tests for a
particular case - and my mother told me when
Hi all,
So I'm reworking part of the forge search error handling to deal with
throwing meaningful SSL messages (see my previous message). My goal, so
you can step back and tell me if my approach is completely wrong: I want
to capture SSL verification failures so I can tell a user that the
OpenSSL CA b
hanks a lot Josh & Daniel for
your help.
ken.
On Mon, Jun 25, 2012 at 6:14 PM, Ken Barber wrote:
> Booyah ... and this now works on Debian:
>
>
>
> require 'net/https'
>
> cert_store = OpenSSL::X509::Store.new
> cert_store.set_default_path
proxy.verify_mode = OpenSSL::SSL::VERIFY_PEER
proxy.cert_store = cert_store
response = nil
proxy.start do |http|
request = Net::HTTP::Get.new('/')
response = http.request(request)
end
puts response.body
Let me try this on all my various VM's now and confirm it.
ken.
On Mon, Jun 25, 2012 at 6:10 PM, Ke
this out, it breaks
ctx.cert_store = cert_store
s = TCPSocket.open('forge.puppetlabs.com', '443')
s = OpenSSL::SSL::SSLSocket.new(s, ctx)
s.sync_close = true
s.connect
Before I had to use ctx.ca_path = '/etc/ssl/certs'. I think we are on
to something here.
ken.
On Mon, Jun 25
e = 'whatever'
response = nil
proxy.start do |http|
request = Net::HTTP::Get.new('/')
response = http.request(request)
end
puts response.body
ken.
On Mon, Jun 25, 2012 at 5:58 PM, Daniel Pittman wrote:
> On Mon, Jun 25, 2012 at 9:54 AM, Josh Cooper wrote:
>> On Mon,
erify ... which is quite sad really, but I imagine this is
due to the hassles involved in making this work properly. I don't want
to take this route.
ken.
On Mon, Jun 25, 2012 at 1:11 PM, Ken Barber wrote:
> (responding to puppet-dev)
>
>>>> I've managed t
(responding to puppet-dev)
>>> I've managed to solve it on Linux by specifying:
>>> https_object.ca_path = '/etc/ssl/certs'
>
> You managed to work around your broken build, I think.
You mean Debian 6's broken build - Lol ... found this using the system
ruby 1.8 from Debian, latest version :-).
I just had an 'aha' moment when trying to make the PMT tool interact
with the forge using SSL.
The problem is with Ruby & OpenSSL and its need for a CA path or file
when I want to use VERIFY_PEER as a mechanism. Since
forge.puppetlabs.com uses a publicly signed certificate, I need to
provide the p
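The fragments of the eventual fix scattered through this thread assemble into roughly the following sketch (hostname taken from the thread; this shows the approach, not the final PMT code):

```ruby
require 'net/https'

# Build a CA store from the system's default locations rather than
# hard-coding ca_path = '/etc/ssl/certs', which is Debian-specific.
store = OpenSSL::X509::Store.new
store.set_default_paths

http = Net::HTTP.new('forge.puppetlabs.com', 443)
http.use_ssl = true
http.verify_mode = OpenSSL::SSL::VERIFY_PEER
http.cert_store = store
# http.start { |h| h.request(Net::HTTP::Get.new('/')) } would now
# verify the peer against the system CA bundle.
```

`set_default_paths` is what makes this portable across distros: it asks OpenSSL for its compiled-in CA locations instead of guessing a path.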
>> I mean a cron job with:
>>
>> facter -y > /var/lib/facter/cache.yaml
>
>
> Based on past experiences I am always a little unnerved by the idea of using
> a file on disk as "API". It is revealing an implementation detail of your
> program that makes it much harder to make changes in the future w
>> can you give some more detail on how the cache will be used? If a fact
>> is found on disk via the rb file and there's nothing in the cache will
>> it then simply run the slow way? and update the cache?
>>
> Facter itself will not use the cache. If you have an application that needs
> facts and
Yeah - I think it was based on your advice KW. Stephen is awesome when
it comes to this kind of stuff.
On Fri, May 18, 2012 at 11:29 AM, Krzysztof Wilczynski
wrote:
> Hi,
>
>
> On Friday, May 18, 2012 10:30:18 AM UTC+1, Ken Barber wrote:
>>
>> Did you see the re-implem
Did you see the re-implementation of 'which' that Stephen Schulte is
working on here?
https://github.com/puppetlabs/facter/pull/189
It's taken from Puppet to a certain extent I believe, with some
backwards-compatible handling that we thought we might need as well to
handle existing custom fact ass
into the next.
On Mon, May 14, 2012 at 6:13 PM, Ken Barber wrote:
> Hrm. You are right, with that in mind I'll work to that end and see
> what I come up with and talk some more on this thread: since it's going
> to affect all rspec-puppet tested projects.
>
> On Mon, May 14, 201
Hrm. You are right, with that in mind I'll work to that end and see
what I come up with and talk some more on this thread: since it's going
to affect all rspec-puppet tested projects.
On Mon, May 14, 2012 at 5:41 PM, Daniel Pittman wrote:
> On Mon, May 14, 2012 at 6:19 AM, Ken Barbe
So I took on this bug this morning, which I'd been meaning to work on
for quite some time:
http://projects.puppetlabs.com/issues/11156
But hit some snags when it came to testing on master. Here is my patch:
https://github.com/kbarber/puppetlabs-ntp/commit/e96894fd8c3a308f1a68d4a5466a2795c0eba6ad
41 PM, Philip Brown wrote:
>
>
> On Thursday, May 10, 2012 12:31:15 PM UTC-7, Ken Barber wrote:
>>
>> Its managed like code, so if you go to:
>>
>> https://github.com/puppetlabs/puppet-docs
>>
>> You can see the README file with the docs on how to build
Its managed like code, so if you go to:
https://github.com/puppetlabs/puppet-docs
You can see the README file with the docs on how to build it, and view it.
ken.
On Thu, May 10, 2012 at 8:19 PM, Philip Brown wrote:
>
>
> On Thursday, May 10, 2012 11:56:15 AM UTC-7, Ken Barber wrote: