Re: [OpenStack-Infra] Selecting New Priority Effort(s)

2018-04-06 Thread Colleen Murphy
On Thu, Apr 5, 2018, at 4:57 PM, Jeremy Stanley wrote:
> On 2018-04-05 14:35:27 + (+), Jens Harbott wrote:
> > 2018-04-04 2:33 GMT+00:00 David Moreau Simard :
> > > It won't be very exciting but we really need to do one of the
> > > following two things soon:
> > >
> > > 1) Ansiblify control plane [1]
> > > 2) Update our puppet things to puppet 4 (or 5?)
> > >
> > > Puppet 3 has been end of life since Dec 31, 2016. [2]
> > >
> > > The longer we draw this out, the more work it'll be :(
> > >
> > > [1]: https://review.openstack.org/#/c/469983/
> > > [2]: https://groups.google.com/forum/#!topic/puppet-users/IdutL5FTW7w
> > 
> > I agree and would vote for option 1); that would also seem to blend
> > well with upgrading to Xenial and avoid having to invest much effort
> > in making puppet things work for Xenial, like we just discovered would
> > be needed for askbot.
> 
> It's not immediately clear to me how rewriting numerous Puppet
> modules in Ansible avoids having to invest much effort... or is it
> the case that a lot of the things we're installing now have
> corresponding Ansible modules already? Has anyone skimmed through
> https://git.openstack.org/cgit/openstack-infra/system-config/tree/modules.env
> and figured out how many of those seem supported by the existing
> Ansible ecosystem vs how many we'd have to create ourselves?
> -- 
> Jeremy Stanley

The puppet modules are already tested with puppet-apply and beaker on Xenial.
There should be very little effort, if any, needed to ensure they work on
Xenial. It is a bit hard for me to imagine that a complete rewrite would be
easier.

Colleen

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

Re: [OpenStack-Infra] Puppet 4, beaker jobs and the future of our config management

2017-06-20 Thread Colleen Murphy
On Tue, Jun 20, 2017 at 8:40 PM, Jeremy Stanley  wrote:

> A couple weeks ago during our June 6 Infra team meeting,
> discussion[1] about the state of our Ansible Puppet Apply spec[2]
> morphed into concerns over the languishing state of our Beaker-based
> Puppet module integration test jobs, work needed to prepare for
> Puppet 4 now that Puppet 3 is EOL upstream[3] for the past 6 months,
> and the emergence of several possibly competing/conflicting approved
> and proposed Infra specs:
>
>   * Puppet Module Functional Testing[4]
>   * Puppet 4 Preliminary Testing[5]
>   * Rename and expand Puppet 4 Preliminary Testing[6]
>   * Ansiblify control plane[7]
>
> As the discussion evolved, unanswered questions were raised:
>
>   1. What are we going to do to restore public reporting?
>
>   2. Should we push forward with the changes needed to address
>  bitrot on the nonvoting Beaker-based integration jobs so we can
>  start enforcing them on new changes to all our modules?
>
For what it's worth, good progress was made here recently. I was pleased
with how quickly the team was willing to review and merge fixes for this
accumulated bitrot. If anyone wants to help finish up the rest of them and
make these jobs voting I'd be happy to lend guidance.

To provide some context on these jobs, I recall one of the primary reasons
we chose to use beaker-rspec as our functional testing tool was that it was
consistent with what the rest of the puppet community and Puppet-OpenStack
was doing. It is definitely true that both teams have benefited from being
able to solve common problems together. But I don't think it has helped
attract other puppet contributors, and our existing team is less than
enthusiastic about diving into ruby and rspec when these tests are broken.
Luckily, these tests all have fixture manifests ready to go so it would be
reasonably easy to rip out beaker and replace it with something else.
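
For context, a fixture manifest is just a small puppet entry point that the
job applies to the test node, so whatever replaces beaker could apply the
same file with puppet apply. A purely hypothetical example (the class name
and parameter are made up, not taken from a real module):

  # fixtures/default.pp - hypothetical sketch of the kind of fixture
  # manifest the integration jobs apply to exercise a module end to end.
  class { '::example_module':
    enable_feature => true,
  }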

>
>   3. Is the effort involved in upfitting our existing modules to
>  Puppet 4 worth the effort compared to trying to replace Puppet
>  with Ansible (a likely contentious debate lurking here) which
>  might attract more developer/reviewer focus and interest?
>
A total rewrite is a very costly shot in the dark. We have no way to
measure the potential benefits and detriments until after the work is done,
and it will certainly be a significant amount of work.

In this case, though, we have a lot of ansible experts on the team, and the
rising popularity of ansible in OpenStack and in the rest of the devops
world makes it likely to attract new contributors. Contrarily, while the
whole team is competent with puppet, the general attitude toward it has
been fairly negative, and very few people have any interest in the
ruby/rspec-based functional testing. The puppet experts who have so far
pushed the team from puppet 2 to 3 and to functional testing with
beaker-rspec are no longer dedicated full-time to the Infra team. If
sufficient personpower and enthusiasm can be focused toward a rewrite, that
is probably a good indication that it will be healthier and more
sustainable in the long run than our languishing puppet infrastructure.

>
> The meeting was neither long enough nor an appropriate venue for
> deciding these things, so I agreed to start a thread here on the ML
> where we might be able to hash out our position on them a little
> more effectively and inclusive of the wider community involved.
> Everyone with a vested interest is welcome to weigh in, of course.
>
It is really important to remember that the stakeholders here include not
only the upstream Infra team but also 3rd party CI operators and downstream
infrastructure-as-code consumers. If we move away from puppet, we must
provide these stakeholders with a migration plan. If that, on top of
migrating ourselves, proves to be too difficult, then we shouldn't do it.

I plan to keep putting work into the puppet modules and moving toward
puppet 4. I do honestly believe it is within reach and easier in the short
term than a rewrite. But as this is not part of my day job description,
it's not sustainable in the long term unless more volunteers step up. If we
get more momentum toward an ansible rewrite then I will completely support
it.

Colleen

>
> [1] http://eavesdrop.openstack.org/meetings/infra/2017/infra.2017-06-06-19.03.log.html#l-24
> [2] http://specs.openstack.org/openstack-infra/infra-specs/specs/ansible_puppet_apply.html
> [3] https://voxpupuli.org/blog/2016/12/22/putting-down-puppet-3/
> [4] http://specs.openstack.org/openstack-infra/infra-specs/specs/puppet-module-functional-testing.html
> [5] http://specs.openstack.org/openstack-infra/infra-specs/specs/puppet_4_prelim_testing.html
> [6] https://review.openstack.org/449933
> [7] https://review.openstack.org/469983
> --
> Jeremy Stanley
>
> ___
> OpenStack-Infra mailing list
> OpenStack-Infra@lists.openstack.org
> 

Re: [OpenStack-Infra] Boston 2017 Summit dinner

2017-05-04 Thread Colleen Murphy
On Fri, Apr 28, 2017 at 2:47 AM, Paul Belanger 
wrote:

> Greetings!
>
> It's that time where we all try to figure out when and where to meet up
> for some dinner and drinks in Boston. While I haven't figured out a place
> to eat (suggestions most welcome), maybe we can decide which night to go
> out.
>
> As a reminder, the summit schedule has 2 events this year that people may
> also
> be attending:
>
>   Mon 8, 6:00pm - 7:30pm - Marketplace Mixer
>   Tue 9, 7:00pm - 10:00pm - StackCity Boston at Fenway Park
>
> Please take a moment to reply and say which day may be better for you.
>
Would love to attend this, thanks for organizing it.

Sunday: Yes
Monday: maybe (maybe after the mixer?)
Tuesday: Yes-ish
Wednesday: No
Thursday: Yes

Colleen
___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

Re: [OpenStack-Infra] Network Requirements for Infracloud Relocation Deployment

2016-03-22 Thread Colleen Murphy
On Mon, Mar 21, 2016 at 10:17 AM, Colleen Murphy <coll...@gazlene.net>
wrote:

> On Thu, Mar 17, 2016 at 11:48 AM, Colleen Murphy <coll...@gazlene.net>
> wrote:
>
>> The networking team at HPE received our request and would like to have a
>> call next week to review it. Our liaison, Venu, should have a draft network
>> diagram we can review. Who would like to join, and what times/days work
>> best? I would propose Tuesday at 1800 UTC (one hour before the Infra
>> meeting).
>>
>> Colleen
>>
> A call has been scheduled for 1800-1845 UTC on Tuesday, March 22.
> Invitations were sent out to folks who expressed interest in attending and
> I can share the meeting phone number and conference ID with anyone who was
> missed.
>
> Allison brought up using the asterisk server for the call, but the
> suggestion was not responded to - I suspect they either didn't understand
> it or didn't feel comfortable with it. It would be my preference not to
> push the issue, as
> they are extending the invitation to us, not the other way around. Instead
> I can commit to taking and dispersing detailed notes.
>
> Colleen
>
Notes from today's meeting:

1) 1G or 10G?
 - 10G useful for image transfers and mirrors in cloud
 - Venu to connect with DC ops to ensure 10G

2) ipv6?
 - Venu to ask Verizon to activate a /48 block, will take a couple of days

3) how many vlans?
 - one untagged for pxe/management, one tagged for public

4) keep 10.10.16.0/24 for internal network

5) Do we need nic bonding?
 - no

6) Any load balancing requirement?
 - no
 - if we were to add load balancing we would host it ourselves

7) Access requirements?
 - full inbound/outbound internet access - no ports blocked
 - firewalls managed locally

The network diagram will need to have some parts redacted before we can
share it publicly.

Colleen
___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra


Re: [OpenStack-Infra] Network Requirements for Infracloud Relocation Deployment

2016-03-19 Thread Colleen Murphy
The networking team at HPE received our request and would like to have a
call next week to review it. Our liaison, Venu, should have a draft network
diagram we can review. Who would like to join, and what times/days work
best? I would propose Tuesday at 1800 UTC (one hour before the Infra
meeting).

Colleen
___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra


Re: [OpenStack-Infra] Network Requirements for Infracloud Relocation Deployment

2016-02-29 Thread Colleen Murphy
On Mon, Feb 29, 2016 at 1:13 PM, Colleen Murphy <coll...@gazlene.net> wrote:

> On Thu, Feb 25, 2016 at 7:40 PM, Jeremy Stanley <fu...@yuggoth.org> wrote:
>
>> On 2016-02-25 17:05:57 -0700 (-0700), Cody A.W. Somerville wrote:
>> [...]
>> > - Allocation of /19 IP network block.
>> [...]
>>
>> Ideally also a /48 of IPv6 addresses routed to the same environment.
>>
>> Oh, and we're going to want reverse DNS for both the IPv4 /19 and
>> IPv6 /48 delegated to (yet to be identified) nameservers under our
>> control.
>>
>> So the request I'm going to send off is:
>
> - "LR5" HPE network for direct connectivity to the Internet
> - Private management network for management and iLO
>
Updated: Private management network for management and iLO, which needs to
be reachable from our nodes

>
> - Ideally, two drops for the private management network and the public
> network, but at a minimum one drop with one tagged and one untagged network
> - Allocation of /19 IP network block for IPv4 and a /48 for IPv6 within
> the LR5 network
> - Reverse DNS for both network blocks delegated to (yet to be identified)
> nameservers under our control
>
> Colleen
>
>
>
___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra


Re: [OpenStack-Infra] Network Requirements for Infracloud Relocation Deployment

2016-02-29 Thread Colleen Murphy
On Thu, Feb 25, 2016 at 7:40 PM, Jeremy Stanley  wrote:

> On 2016-02-25 17:05:57 -0700 (-0700), Cody A.W. Somerville wrote:
> [...]
> > - Allocation of /19 IP network block.
> [...]
>
> Ideally also a /48 of IPv6 addresses routed to the same environment.
>
> Oh, and we're going to want reverse DNS for both the IPv4 /19 and
> IPv6 /48 delegated to (yet to be identified) nameservers under our
> control.
>
> So the request I'm going to send off is:

- "LR5" HPE network for direct connectivity to the Internet
- Private management network for management and iLO
- Ideally, two drops for the private management network and the public
network, but at a minimum one drop with one tagged and one untagged network
- Allocation of /19 IP network block for IPv4 and a /48 for IPv6 within the
LR5 network
- Reverse DNS for both network blocks delegated to (yet to be identified)
nameservers under our control

Colleen
___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra


Re: [OpenStack-Infra] Mitaka Infra Sprint

2016-01-18 Thread Colleen Murphy
On Wed, Dec 9, 2015 at 9:17 PM, Joshua Hesketh 
wrote:

> Hi all,
> As discussed during the infra-meeting on Tuesday[0], the infra team will
> be holding a mid-cycle sprint to focus on infra-cloud[1].
> The sprint is an opportunity to get in a room and really work through as
> much code and reviews as we can related to infra-cloud while having each
> other nearby to discuss blockers, technical challenges and enjoy company.
> Information + RSVP:
> https://wiki.openstack.org/wiki/Sprints/InfraMitakaSprint
> Dates: Mon. February 22nd at 9:00am to Thursday, February 25th
> Location: HPE Fort Collins Colorado Office
> Who: Anybody is welcome. Please put your name on the wiki page if you are
> interested in attending.
> If you have any questions please don't hesitate to ask.
> Cheers,
> Josh + Infra team
> [0] http://eavesdrop.openstack.org/meetings/infra/2015/infra.2015-12-08-19.00.html
> [1] https://specs.openstack.org/openstack-infra/infra-specs/specs/infra-cloud.html
>
Since I didn't see one, I started an etherpad for the sprint, and added it
to the wiki page:

https://etherpad.openstack.org/p/mitaka-infra-midcycle

Colleen
___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra


Re: [OpenStack-Infra] [infra] Nodepool and wiki.openstack.org downtime on 5 January 2016

2016-01-05 Thread Colleen Murphy
On Mon, Jan 4, 2016 at 10:46 AM, Colleen Murphy <coll...@gazlene.net> wrote:

> The site wiki.openstack.org as well as the nodepool service will be
> offline for 30 minutes starting at 2100 UTC on 5 January 2016 while we
> update infrastructure management of these services. The nodepool downtime
> will cause a disruption in the testing infrastructure that will cause jobs
> to be delayed.
>
> If you have questions, please reply to this thread or contact us in
> #openstack-infra.
>
> Colleen
>
Our infrastructure update was completed successfully. Thank you for your
patience.

Colleen
___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra


[OpenStack-Infra] [infra] Nodepool and wiki.openstack.org downtime on 5 January 2016

2016-01-04 Thread Colleen Murphy
The site wiki.openstack.org as well as the nodepool service will be offline
for 30 minutes starting at 2100 UTC on 5 January 2016 while we update
infrastructure management of these services. The nodepool downtime will
cause a disruption in the testing infrastructure that will cause jobs to be
delayed.

If you have questions, please reply to this thread or contact us in
#openstack-infra.

Colleen
___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra


Re: [OpenStack-Infra] Reconcile apache fixes for >= 2.4

2015-11-05 Thread Colleen Murphy
On Wed, Nov 4, 2015 at 10:55 AM, Yolanda Robla Mota <
yolanda.robla-m...@hpe.com> wrote:

> Hello Infra
>
> I want to start a thread about the best way to reconcile the apache fixes
> that we put on place for upgrade to apache >= 2.4
> There are two different ways now:
>
> 1. rely on apache mod_version, and add a check inside apache vhosts:
>
> <IfVersion >= 2.4>
>   Require all granted
> </IfVersion>
>
> That is the fix currently in place for puppet-httpd, puppet-cgit, and some
> other modules. It is quite simple, but has the disadvantage of depending on
> mod_version apache module, so every manifest using that needs to ensure
> that mod_version is installed.
>
> 2. Rely on satisfy any:
>
> Allow from all
> Satisfy Any
>
> It doesn't need an extra check for the version, but it is deprecated as shown
> on: https://httpd.apache.org/docs/2.4/howto/auth.html . It also needs
> module mod_access_compat to be present
> in newer apache versions. We currently have this on puppet-zuul.
>
> 3. Other alternatives would be:
> - add a parameter to puppet-httpd module, so we can pass the apache
> version we are expected to have
> - create a custom fact to give us the current apache version in puppet,
> and do the apache check using that fact instead of relying on mod_version
> - use osfamily/operatingsystem/lsbrelease facts to decide about apache
> version, and apply proper directives there
>
> I'd like to get more opinions about how to better proceed with that, and
> ensure that all infra puppet modules are following the same criteria.
>
> Best
>
I kind of like the idea of offloading this kind of logic into the service
and out of config management, especially since mod_version makes it easy to
do so. If on some terrible day we decide to switch config management tools,
this kind of mindset will make the switchover a tiny bit easier. So I'm a
fan of option 1. I don't know enough about Apache to comment on option 2.
Options in 3 are more puppetty but I don't see a big advantage to any of
them.
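
To make option 1 a little more concrete, here is a minimal sketch of the
Puppet side (the httpd::mod define name is an assumption modeled on our
puppet-httpd module, and the vhost directives are illustrative rather than
copied from a real module):

  # Ensure mod_version is enabled so the <IfVersion> guard below works;
  # the define name is assumed, adjust to whatever the module provides.
  httpd::mod { 'version':
    ensure => present,
  }

  # The guarded directives then live in the vhost template, e.g.:
  #   <IfVersion >= 2.4>
  #     Require all granted
  #   </IfVersion>
  #   <IfVersion < 2.4>
  #     Order allow,deny
  #     Allow from all
  #   </IfVersion>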

Colleen
___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra


Re: [OpenStack-Infra] [Testing Puppet Modules] Issues with beaker and bundler gem versions

2015-11-04 Thread Colleen Murphy
On Wed, Nov 4, 2015 at 3:01 AM, Colleen Murphy <coll...@gazlene.net> wrote:

> On Tue, Nov 3, 2015 at 1:04 PM, Maite Balhester
> <mbalh...@thoughtworks.com> wrote:
>
>> Hey folks, let me introduce myself first, I’m Maitê and I’m currently
>> working on adding tests in the puppet modules with my team.
>>
>> Last week we started to face errors in gate jobs regarding the
>> fog-google version (
>> http://logs.openstack.org/28/220228/9/check/gate-openstackci-beaker-trusty-dsvm/7ca0fc1/console.html
>> ).
>>
>> As you can see in this link (
>> https://tickets.puppetlabs.com/browse/BKR-564) this error was fixed in
>> beaker > 2.24.0, but somehow the bundler 1.10.6 is not fetching the latest
>> beaker version (it is actually fetching version 2.24.0).
>>
>> We can fix the beaker version to 2.27.0 in the Gemfiles for the modules
>> (in our tests this worked fine), but it is not the best approach, since we
>> have a lot of modules and this can be easily outdated.
>>
> I /think/ this should be fixed[1], and hopefully the fix will be released
> soon. The dependency resolution is really confusing here.
>
>> Another point of concern is that vagrant 1.7.x expects bundler (<=
>> 1.10.5, >= 1.5.2), (https://github.com/mitchellh/vagrant/issues/6158)
>> and even if we fix the version, running the tests with bundler 1.10.6
>> will probably break the job.
>>
> This also looks like it was fixed[2] in beaker so again we just need to
> await a release. I'll try to poke the right people during US west coast
> workday hours.
>
>> If we fix the beaker version, we must fix the bundler version to 1.10.5,
>> and I personally don’t like to specify versions because it is hard to
>> maintain and it can get easily updated. But I don’t see other way to deal
>> with this subject.
>>
>> What do you think? Do we have a better way to deal with this situation?
>>
> A good resource for these types of issues is #puppet or #puppet-dev on
> freenode, or the puppet-users or puppet-dev mailing lists. Module testing
> sits on the boundary between puppet users' questions and puppet developers'
> questions so either channel or list is appropriate to ask.
>
>> Thanks for your attention and regards,
>>
>> Colleen
>
> [1]
> https://github.com/puppetlabs/beaker/commit/3625c573e8a59ca634ef2b5d6d0ae9d021e8d2d5
>
> [2] https://github.com/puppetlabs/beaker/pull/1003
>
beaker 2.28.0 was released today, and rechecking
https://review.openstack.org/#/c/220228/ shows that the gem dependency
errors seem to have gone away.

Colleen
___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra


Re: [OpenStack-Infra] [Testing Puppet Modules] Issues with beaker and bundler gem versions

2015-11-04 Thread Colleen Murphy
On Tue, Nov 3, 2015 at 1:04 PM, Maite Balhester 
wrote:

> Hey folks, let me introduce myself first, I’m Maitê and I’m currently
> working on adding tests in the puppet modules with my team.
>
> Last week we started to face errors in gate jobs regarding the
> fog-google version (
> http://logs.openstack.org/28/220228/9/check/gate-openstackci-beaker-trusty-dsvm/7ca0fc1/console.html
> ).
>
> As you can see in this link (https://tickets.puppetlabs.com/browse/BKR-564)
> this error was fixed in beaker > 2.24.0, but somehow the bundler 1.10.6 is
> not fetching the latest beaker version (it is actually fetching version
> 2.24.0).
>
> We can fix the beaker version to 2.27.0 in the Gemfiles for the modules
> (in our tests this worked fine), but it is not the best approach, since we
> have a lot of modules and this can be easily outdated.
>
I /think/ this should be fixed[1], and hopefully the fix will be released
soon. The dependency resolution is really confusing here.

> Another point of concern is that vagrant 1.7.x expects bundler (<= 1.10.5,
> >= 1.5.2), (https://github.com/mitchellh/vagrant/issues/6158) and even if
> we fix the version, running the tests with bundler 1.10.6 will probably
> break the job.
>
This also looks like it was fixed[2] in beaker so again we just need to
await a release. I'll try to poke the right people during US west coast
workday hours.

> If we fix the beaker version, we must fix the bundler version to 1.10.5,
> and I personally don’t like to specify versions because it is hard to
> maintain and it can get easily updated. But I don’t see other way to deal
> with this subject.
>
> What do you think? Do we have a better way to deal with this situation?
>
A good resource for these types of issues is #puppet or #puppet-dev on
freenode, or the puppet-users or puppet-dev mailing lists. Module testing
sits on the boundary between puppet users' questions and puppet developers'
questions so either channel or list is appropriate to ask.

> Thanks for your attention and regards,
>
> Colleen

[1]
https://github.com/puppetlabs/beaker/commit/3625c573e8a59ca634ef2b5d6d0ae9d021e8d2d5

[2] https://github.com/puppetlabs/beaker/pull/1003
___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra


Re: [OpenStack-Infra] puppet-mysql migration discussion

2015-11-02 Thread Colleen Murphy
On Mon, Nov 2, 2015 at 10:49 AM, Paul Belanger 
wrote:

> On Mon, Nov 02, 2015 at 10:27:26AM -0800, Clark Boylan wrote:
> > On Mon, Nov 2, 2015, at 07:58 AM, Paul Belanger wrote:
> > > Greetings,
> > >
> > > I'd like to start a thread talking about the effort to upgrade our
> > > version of
> > > puppet-mysql to a newer / latest version. I know there has been some
> talk
> > > on this already, would somebody mind adding some information?
> > >
> > > I have heard 3 things:
> > >
> > >   1. Remove database support from current modules, and only use
> > >   sql_connection
> > >  strings.
> > >   2. Move everything to trove
> > >   3. Setup mysql cluster
> > I think this is mostly orthogonal to updating the puppet-mysql module
> > because 2 + 1ish (use sql_connection) is basically what we do everywhere
> > but Jenkins slave image builds, paste.openstack.org, and
> > wiki.openstack.org.
> > >
> > > Again, I don't know if there are true or not, just things I have seen
> > > people
> > > talk about.
> > >
> > > The obvious part that is missing is _how_ we are going to do the
> > > upgrade. I
> > > know some people (clarkb?) already have some ideas on that.
> > >
> > > The reason for me asking, 2 weeks ago I offered to help the infra-cloud
> > > move
> > > forward and upgrading puppet-mysql was one of the items discussed. So,
> > > here I
> > > am, offering to do the grunt work, just need some understanding on what
> > > people
> > > want to do.
> > I was hoping that there was a forward and backward compatible
> > intermediate step we could make where old and new configs were supported
> > but I am told that isn't possible. As a result we will likely need to
> > update puppet-mysql, then all three of the above uses of puppet-mysql
> > semi atomically to keep everything working.
> >
> > Testing that image builds work before the update is simple as we can
> > just run
> > openstack-infra/project-config/nodepool/scripts/prepare_node_bare.sh to
> > see if that works. Paste.o.o is simple and has gone from local Drizzle
> > to Trove to local MySQL without much fuss so I doubt it will have much
> > trouble but a test deployment is easy if we are worried.
> >
> > The tricky one is going to be wiki.openstack.org and this is maybe where
> > the above list is relevant to the discussion. We could move it to an off
> > host database hosted by trove and not worry about updating puppet-mysql
> > in that module at all. Or we can test a deployment of wiki using the
> > newer mysql module before prod deployment. In either case we should
> > probably announce a wiki downtime prior to the upgrade, stop apache/php,
> > perform a database backup, switch to trove/run puppet with newer module,
> > restart apache/php.
> >
> So one of the big things I see, is once we bump puppet-mysql to the newer /
> latest version, all our current nodes will start consuming it (unless we
> stop
> puppet on each server).
>
> One thought I had was something like this:
>
>   1. Create /opt/puppet/modules/old, install current puppet-mysql module
> into
>  it.
>   2. Remove /etc/puppet/modules/mysql and append --modulepath to now
> include
>  /opt/puppet/modules/old
>   3. Add dynamic logic to build modulepath for puppet-mysql based on flag
> and
>  force all nodes to it.
>   4. install puppet-mysql latest into /etc/puppet/modules.
>   5. Start migration process, paste.o.o for example.
>   6. Work through all nodes.
>   7. Disable dynamic modulepath logic (can be used for other module
> upgrades).
>   8. Delete /opt/puppet/modules/old/mysql
>
> Something like this would allow us to somehow control which nodes start
> consuming the newer version of puppet-mysql, instead of a massive cutover
> for all our nodes.
>
A possible simpler solution would be to use the load_module_metadata()
function from the stdlib module[1]. This means writing logic into each of
the modules that create a mysql::server to check for the module version and
do the right thing accordingly, e.g.

  $mysql_module_metadata = load_module_metadata('mysql', true)
  if empty($mysql_module_metadata) {
    # mysql module is too old to support metadata.json
    class { 'mysql::server':
      config_hash => { 'root_password' => 'secret' }
    }
  } else {
    # we could check the actual version with $mysql_module_metadata['version']
    # or just assume it's new
    class { 'mysql::server':
      root_password => 'secret'
    }
  }

Right now the ability to not fail when metadata.json isn't found isn't
released[2] so we'd have to update stdlib to the latest commit in master or
pressure Puppet Labs into releasing sooner than they plan to.

Colleen

[1] https://github.com/puppetlabs/puppetlabs-stdlib#load_module_metadata
[2] https://github.com/puppetlabs/puppetlabs-stdlib/pull/537
___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra


Re: [OpenStack-Infra] vcsrepo upstream

2015-09-16 Thread Colleen Murphy
On Wed, Sep 16, 2015 at 2:00 AM, Marton Kiss  wrote:


> It has RedHat support only
>


It supports Ubuntu as well. The git class, which only supports RedHat, just
installs git from repoforge. On Ubuntu we would just manage this with a
package resource. The git resource, which is the replacement for the
vcsrepo resource, supports anything with git installed.
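
For comparison, a rough sketch of the two resource types side by side (the
path and repository are illustrative, and the git resource's attribute names
are my assumption from the module's README rather than a verified interface):

  # What we do today with vcsrepo:
  vcsrepo { '/opt/example-repo':
    ensure   => latest,
    provider => git,
    revision => 'master',
    source   => 'https://git.openstack.org/openstack-infra/system-config',
  }

  # Roughly equivalent usage of the git resource (attribute names assumed):
  git { '/opt/example-repo':
    ensure => latest,
    source => 'https://git.openstack.org/openstack-infra/system-config',
  }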



> I see no new commits in the last 6 months.
>


The module was moved to puppet-community (and the class was split out) and
has seen activity in the last two months:

https://github.com/puppet-community/puppet-git_resource

Colleen
___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra


Re: [OpenStack-Infra] vcsrepo upstream

2015-09-16 Thread Colleen Murphy
On Wed, Sep 16, 2015 at 11:12 AM, Marton Kiss  wrote:

> Hi Colleen,
>
> Have you ever tried this puppet-community puppet-git_resource module? It
> seems that it is not ready: the resource auto-depends on a git class
> here:
>
> https://github.com/puppet-community/puppet-git_resource/blob/04ec35488c4d0d5374c736daa4a7e89fbf3e8d84/lib/puppet/type/git.rb#L86
>
> but it was never declared anywhere, the whole manifests directory is
> missing from the repo. (it was present in nanliu's version)
>
> Brgds,
>   Marton
>
I've not personally used it, no.

An autorequire will add a require relationship to a resource only if that
resource is in the catalog, so autorequiring a class that is no longer in
the module will not cause any problems.

I have not seen the discussion, but what I suspect happened is that there
is another module from puppetlabs that is also called 'git', but it manages
the installation and configuration of git, not git repositories. I expect
they renamed the module and removed the class in order to be compatible
with the other module.

#puppet-community would be a good place to inquire about this module.

Colleen
___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra


Re: [OpenStack-Infra] Using ensure=running by default in Puppet modules

2015-08-13 Thread Colleen Murphy
On Wed, Aug 12, 2015 at 2:15 PM, Jeremy Stanley <fu...@yuggoth.org> wrote:

 Change https://review.openstack.org/168306 for puppet-zuul came to
 my attention earlier today when it merged. After a quick discussion
 on IRC, Spencer proposed a revert which I approved so that we can
 get a little more discussion going about this topic.

 First, I'm really sorry I didn't see and weigh in on it sooner (it's
 not like I didn't have time, it was ~4.5 months old). Second, we've
 grown lots of new core reviewers and I'm thrilled they're reviewing
 and approving changes, and I don't want to discourage that in any
 way, so thank you to those of you who did review that change.

 In the past, not using ensure=running on services in our Puppet
 modules was intentional, particularly for more stateful services,
 especially for services which trigger other (possibly remote)
 actions and have a potential to make a mess. It's pretty likely that
 those of us who were around for the earlier discussions about it
 failed to write it down anywhere obvious, leading others to assume
 it's a bug/oversight. I see a couple of obvious solutions though
 there are no doubt others:

 1. Document in each module where we do this, at least in the readme
 and probably also in an inline comment around the service
 definition, that it's that way on purpose. Optionally, make the
 ensure conditional on a class parameter that defaults to unmanaged
 in case some downstreams want to use Puppet like a service manager.

 2. Similar managed/unmanaged parameter, but make it default to
 running and override the default to unmanaged in our
 ::openstack_project classes. This means that we cease consuming our
 modules with the same defaults as downstream users, however if it
 turns out that our OpenStack Infra root sysadmins really do have a
 very different preference from the majority of our downstream
 consumers then at least we can be clear about that.

I am in favor of the managed/unmanaged parameter (I don't have a preference
for the default).

Managing the state of a service is one of the most fundamental features of
puppet[1]. Downstream users of a puppet module will always expect the
module to install packages, change config files, and start services. They
are expecting puppet to do automate everything within a single node. If
they are doing maintenance and they do not want packages installed, config
files changed, or services started, they will disable puppet during the
maintenance window. While this does not appear to fit in with Infra's
workflow, it is a valid use case for downstream users and I believe it
should be allowed via a parameter. Of course documenting the unexpected
behavior is the next best thing.
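
To make the shape of such a parameter concrete, a minimal sketch (the class
and parameter names are purely illustrative, not taken from an existing
module; the default could equally be flipped for option 2):

  class example::service (
    $manage_service = false,
  ) {
    if $manage_service {
      # downstream users who want puppet to act as a service manager
      service { 'example':
        ensure => running,
        enable => true,
      }
    } else {
      # current Infra behavior: enable the service but leave its running
      # state unmanaged
      service { 'example':
        enable => true,
      }
    }
  }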

Colleen

[1] https://docs.puppetlabs.com/puppet_core_types_cheatsheet.pdf

 --
 Jeremy Stanley

 ___
 OpenStack-Infra mailing list
 OpenStack-Infra@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra