The bare minimum is to not 'ensure => latest' on production systems against a 
repo you don't maintain. I've seen two patterns to implement this (rough 
sketches below) - assuming you are managing Puppet components via Puppet 
itself, obviously:

1. Use the upstream repos but 'ensure' to known-good versions. Test new 
upstream releases on canary nodes and roll them out in a controlled deployment.
2. Use ensure => latest but control the *repos* that hosts are pointed to. I 
did this at a previous job: we ended up mirroring a number of upstream 
repositories for bandwidth and availability reasons anyway, so testing upgrades 
became a question of pointing hosts at 'canary' vs. 'production' repositories 
for all packaged software.
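
To make this concrete, here's a rough sketch of both patterns in Puppet code. 
The version strings, the 'repo_stage' fact, and the mirror URLs are invented 
for illustration - swap in whatever your site actually uses. Pattern 1 looks 
something like:

    # Pin puppet-agent to a known-good version; canary nodes get the
    # release under test first. (Versions and fact name are made up.)
    $agent_version = $facts['repo_stage'] ? {
      'canary' => '1.8.2-1.el7',   # new release being evaluated
      default  => '1.7.1-1.el7',   # known-good production version
    }

    package { 'puppet-agent':
      ensure => $agent_version,
    }

And pattern 2 would be more like this (as an alternative, not combined with 
the above - they'd collide on the puppet-agent resource):

    # Keep ensure => latest, but control which repo the host points at.
    # The mirror hostname and 'repo_stage' fact are hypothetical.
    yumrepo { 'puppet':
      baseurl  => "https://mirror.example.com/${facts['repo_stage']}/puppet/el/7/x86_64",
      gpgkey   => 'https://mirror.example.com/keys/RPM-GPG-KEY-puppet',
      gpgcheck => 1,
      enabled  => 1,
    }

    package { 'puppet-agent':
      ensure  => latest,
      require => Yumrepo['puppet'],
    }

The nice property of the second approach is that promoting a release never 
touches the manifests; you just sync the vetted packages from the canary 
mirror into the production one.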

Clearly the second one requires a little more setup. I took a swing at 
documenting the first one on the (drafted but now outdated) collections docs 
page; I'd welcome any wordsmithing or additional patterns you'd suggest: 
https://docs.puppet.com/puppet/latest/puppet_collections.html

--eric0


> On Dec 5, 2016, at 1:39 PM, Rob Nelson <[email protected]> wrote:
> 
> Eric, what IS the rough outline on how to avoid incompatible updates with a 
> consolidated repo? I'm particularly interested in how it would work with 
> puppetlabs/puppet_agent (since `latest` would suddenly have a much different 
> meaning).
> 
> 
> Rob Nelson
> [email protected]
> On Mon, Dec 5, 2016 at 4:25 PM, Eric Sorenson <[email protected]> wrote:
> Hi all. tl;dr: We are proposing moving the open source package repositories 
> back to a single repository for Puppet-owned projects and their dependencies. 
> This represents a shift from our stated plan to release major-version 
> releases that might contain backwards incompatibilities into their own Puppet 
> Collection repositories, but as a result it will be less confusing to use the 
> packages and easier to stay current.
> 
> Long version: When we released Puppet 3.0 in 2013, backward incompatibilities 
> between it and Puppet 2.7 broke a number of sites that had configured their 
> provisioning or package updates to install the latest version of Puppet from 
> our repositories. In order to prevent similar breakage when we released 
> Puppet 4 in April 2015, we introduced it into a new repository called Puppet 
> Collection 1 (PC1), so users had to opt in rather than opt out. The idea was 
> that future backward-incompatible updates would trigger new Puppet 
> Collections, which would also be opt-in, so that a user could stay on PC1 and 
> only move to PC2 when they were ready (background reading: 
> https://puppet.com/blog/welcome-to-puppet-collections). In practice, the 
> switching costs to get everyone onto a new repository seemed really high and 
> for the most part the impact of releasing into the existing collection was 
> low, so instead we either shipped releases like PuppetDB 4.0 into PC1 or 
> deferred shipping versions with big changes, such as when we rolled back from 
> Ruby 2.3 to 2.1 for puppet-agent-1.7.0.
> 
> We've been exploring our options to balance between the following criteria:
> 
> - avoid breaking sites, to not repeat the Puppet 2 to 3 pain
> - provide a set of component packages that are known to work with each other, 
> and provide a basis for Puppet Enterprise platform releases
> - encourage rapid adoption of new releases by the open source community
> - provide commercial differentiation on support lifecycle, similar to the 
> RHEL / Fedora model
> 
> We talked through a number of options in pretty exhaustive detail and have 
> tentatively settled on this as the best – or maybe "least bad" – course of 
> action:
> 
> - make a release package with a new name (probably "puppet-release"), 
> eliminating the public face of "Collections"
> - move the existing repository directory structure over to a top-level 
> "puppet" repo, leaving links in place for current PC1 users to avoid breaking 
> them.
> - publish and promote the plan (probably including re-visiting that blog post 
> above and making a new one to advertise what's happening), including 
> instructions on how to avoid incompatible updates if you don't want them, and 
> updating 
> https://docs.puppet.com/puppet/latest/reference/puppet_collections.html#puppet-collection-contents
> - continue publishing any and all open-source releases to the "puppet" repo, 
> including major-version releases.
> 
> The patching/update policy will remain as it is today, where only the latest 
> series receives patches. For instance, once Puppet 4.9.0 is out, there will 
> be no more 4.8.x releases. The package repositories which contain Long Term 
> Support Puppet Enterprise point releases will continue to be private, but the 
> branches/tags of the components that comprise these point releases will 
> remain public, so people could rebuild them if they wanted to.
> 
> Speaking of community upstream, we want to enable builds of Puppet that 
> behave reliably, stay current with our bugfixes and release cadence, and run 
> on OSes that Puppet Inc. doesn't commercially support. We've been working to 
> enable outside folks to rebuild and distribute our software and are going to 
> continue to focus energy on this. As a few examples, we are:
> - working to get Puppet 4.x and Facter 3 built as standalone packages for 
> Solaris
> - investigating the OS-native build toolchain for OSes with current compilers 
> like Ubuntu Yakkety and Fedora 25 (to avoid having to rebuild the world to 
> get the C++ packages built)
> - making facter-3 installable via gem for testing and distro packaging 
> (FACT-1523)
> - working on including the Docker-ized Puppet Server stack into CI so new 
> versions are automatically built and uploaded to docker hub along with 
> traditional packages.
> 
> I'd love to hear your feedback (just reply on this thread) on the proposal 
> overall and additional steps that would make your lives easier (with respect 
> to packaging and repos, that is). Although the next major versions won't be 
> out for a few more months, we're looking to make the infrastructure and 
> policy changes before the end of the year, so please chime in.
> 
> --eric0
> 

Eric Sorenson - [email protected] 
director of product, puppet ecosystem

