[OpenStack-Infra] Retire pabelanger as infra-root

2020-05-25 Thread Paul Belanger
Hello all,

The time has come for me to step down from my infra-root duties. Sadly, my
day-to-day job is no longer directly related to openstack-infra, and I am
finding it difficult to be involved in an 'infra-root' capacity to help the
project.

Thanks to everyone on the infra team; you are all awesome humans! I
hope some time in the future I'll be able to get more involved with the
opendev.org effort, but sadly today isn't that day.

https://review.opendev.org/668192/

Paul


___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

Re: [OpenStack-Infra] Touching base; Airship CI cluster

2020-01-22 Thread Paul Belanger
On Wed, Jan 22, 2020 at 02:12:11PM -0800, Clark Boylan wrote:
> On Tue, Jan 7, 2020, at 1:45 AM, Roman Gorshunov wrote:
> > Hello Clark,
> > 
> > Thank you for your reply. Meeting time is OK for me. I have forwarded
> > invitation to Pete Birley and Matt McEuen, they would hopefully join
> > us.
> 
> I wanted to make sure we got a summary of this meeting sent out. Notes were 
> kept at https://etherpad.openstack.org/p/Wqxwce1UDq
> 
> Airship needs to test their All in One deployment tool. This tool deploys 
> their entire bootstrapping system into a single VM which is then used to 
> deploy other software which may be clustered. Because the production usage of 
> this tool is via a VM it is actually important to be able to test the 
> contents of that VM in CI and that is what creates the memory requirements 
> for Airship CI.
> 
> We explained the benefits of being able to run Airship CI on less special 
> hardware. Airship gains redundancy as more than one provider can supply these 
> resources, reliability should be improved as nested virt has been known to be 
> flaky, and better familiarity within the community with global resources 
> means that debugging and working together is easier.
> 
> However, we recognize that Airship has specific constraints today that 
> require more specialized testing. The proposed plan is to move forward with 
> adding a new cloud, but have it provide specialized and generic resources. 
> The intent is to address Airship's needs of today with the expectation that 
> they will work towards running on the generic resources. Having generic 
> resources ensures that the infra team has exposure to this new cloud outside 
> the context of Airship. This improves familiarity and debuggability of the 
> system. It is also more equitable as other donations are globally used. Also, 
> Nodepool doesn't actually allow us to prevent consumption of resources 
> exposed to the system; however, we would ask that specialized resources only 
> be used when necessary to test specific cases as with Airship. This is 
> similar to our existing high memory, multi numa node, nested virt enabled 
> test flavors.
> 
> For next steps we'll work to add the new cloud with the two sets of flavors, 
> and Airship will begin investigating what a modified test setup looks like to 
> run on our generic resources. We'll see where that takes us.
> 
> Let me know if this summary needs editing or updating.
> 
> Finally, we'll be meeting again Wednesday January 29, 2020 at 1600UTC to 
> follow up on any questions now that things should be moving. I recently used 
> jitsi meet and it worked really well, so I want to give that a try for this. 
> Let's meet at https://meet.jit.si/AirshipCICloudFun. Fungi says you can click 
> the circled "i" icon at that url to get dial in info if necessary.
> 
> If for some reason jitsi doesn't work we'll fall back to the method used last 
> time: https://wiki.openstack.org/wiki/Infrastructure/Conferencing room 6001.
> 
Regarding the dedicated cloud, it might be an interesting discussion
point to talk with some of the TripleO folks from when the
tripleo-test-cloud-rh1 cloud was still a thing. As most infra people
know, this was a cloud dedicated to running TripleO-specific jobs.

There was an effort to make their jobs more generic, so they could run on any
cloud infrastructure, which resulted, IMO, in a large increase in testing (as
there was much more capacity). While it took a bit of effort, I believe
overall it was a real improvement for CI.

Paul


___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

Re: [OpenStack-Infra] [zuul-jobs] configure-mirrors: deprecate mirroring configuration for easy_install

2019-11-25 Thread Paul Belanger
On Mon, Nov 25, 2019 at 04:02:13PM +1100, Ian Wienand wrote:
> Hello,
> 
> Today I force-merged [5] to avoid widespread gate breakage.  Because
> the change is in zuul-jobs, we have a policy of announcing
> deprecations.  I've written the following but not sent it to
> zuul-announce (per policy) yet, as I'm not 100% confident in the
> explanation.
> 
> I'd appreciate it if, once proof-read, someone could send it out
> (modified or otherwise).
> 
> Thanks,
> 
Greetings!

Rather than force-merging, and potentially breaking other zuul installs, what
about a new feature flag that stays enabled by default, but which the openstack
base jobs disable?  This would still allow older versions of setuptools to
work, I would guess.
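Roughly what I have in mind, as a sketch only (the variable name below is made
up, not an existing configure-mirrors option):

  # roles/configure-mirrors/defaults/main.yaml
  configure_mirrors_easy_install: true

  # the OpenStack base job would then opt out of the easy_install setup
  - job:
      name: base
      vars:
        configure_mirrors_easy_install: false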

That said, Ansible's Zuul is not affected, as we currently fork
configure-mirrors for our own purposes; I'll check now to confirm that we are
also not affected.

> -i
> 
> --
> 
> Hello,
> 
> The recent release of setuptools 42.0.0 has broken the method used by
> the configure-mirrors role to ensure easy_install (the older method of
> installing packages, before pip came into widespread use [1]) would only
> access the PyPI mirror.
> 
> The prior mirror setup code would set the "allow_hosts" whitelist to
> the mirror host exclusively in pydistutils.cfg.  This would avoid
> easy_install "leaking" access outside the specified mirror.
> 
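For anyone less familiar with that mechanism, the old behaviour amounted to
templating something like the following onto the node (a sketch, not the
role's literal task; the mirror variable name is a placeholder):

  - name: Restrict easy_install to the PyPI mirror
    copy:
      dest: "{{ ansible_user_dir }}/.pydistutils.cfg"
      content: |
        [easy_install]
        index_url = https://{{ mirror_fqdn }}/pypi/simple
        allow_hosts = {{ mirror_fqdn }}
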
> Change [2] in setuptools means that pip is now used to fetch packages.
> Since it does not implement the constraints of the "allow_hosts"
> setting, specifying this option has become an error condition.  This
> is reported as:
> 
>  the `allow-hosts` option is not supported when using pip to install
> requirements
> 
> It has been pointed out [3] that this prior code would break any
> dependency_links [4] that might be specified for the package (as the
> external URLs will not match the whitelist).  Overall, there is no
> desire to work-around this behaviour as easy_install is considered
> deprecated for any current use.
> 
> In short, this means the only solution is to remove the now
> conflicting configuration from pydistutils.cfg.  Due to the urgency of
> this update, it has been merged with [5] before our usual 2-week
> deprecation notice.
> 
> The result of this is that older setuptools (perhaps in a virtualenv)
> with jobs still using easy_install may not correctly access the
> specified mirror.  Assuming jobs have access to PyPI they would still
> work, although without the benefits of a local mirror.  If such jobs
> are firewalled from upstream they may now fail.  We consider the
> chance of jobs using this legacy install method in this situation to
> be very low.
> 
> Please contact zuul-discuss [6] with any concerns.
> 
> We now return you to your regularly scheduled programming :)
> 
> [1] https://packaging.python.org/discussions/pip-vs-easy-install/
> [2] 
> https://github.com/pypa/setuptools/commit/d6948c636f5e657ac56911b71b7a459d326d8389
> [3] https://github.com/pypa/setuptools/issues/1916
> [4] https://python-packaging.readthedocs.io/en/latest/dependencies.html
> [5] https://review.opendev.org/695821
> [6] http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss
> 
> 
> ___
> OpenStack-Infra mailing list
> OpenStack-Infra@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra


___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

Re: [OpenStack-Infra] Low Key Monday Evening Get Together

2019-04-24 Thread Paul Belanger
On Wed, Apr 24, 2019 at 12:31:05PM -0400, Clark Boylan wrote:
> On Wed, Apr 24, 2019, at 9:14 AM, Clark Boylan wrote:
> > Hello Infra!
> > 
> > Monday evening is looking like a good night to have a low key informal 
> > team gathering. The only official event on the calendar is the 
> > Marketplace Mixer which runs until 7pm. Weather permitting I'd like to 
> > head back to Lowry Beer Garden (where we've gone at past PTGs). It is a 
> > bit out of the way from the conference center so we will need to 
> > coordinate uber/lyft transport but that hasn't been a problem in the 
> > past.
> > 
> > Let's meet at 6:30pm (I can send specific location once onsite) and head 
> > over to Lowry. If the weather looks terrible I can call ahead and see 
> > if their indoor area is open and if they say it is "meh" we'll find 
> > something closer to the conference center. Also, they close at 9pm 
> > which should force us to get some sleep :)
> > 
> > Finally, Monday is a good day because it is gertty's birthday. Hope to 
> > see you then.
> > 
> 
> Also Tuesday night is the official Party event. Sign up at 
> https://www.eventbrite.com/e/the-denver-party-during-open-infrastructure-summit-tickets-58863817262
>  with password "denver". If you are like me and don't want to give out a 
> phone number just enter "555-555-" in that field.
> 
> These details apparently went out to attendees already but I didn't get said 
> email so posting it here just in case it is useful for others too.
>
Since I won't be in Denver next week, have a great time. The beer garden
is a great choice.

- Paul

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

Re: [OpenStack-Infra] Adding index and views/dashboards for Kata to ELK stack

2018-11-27 Thread Paul Belanger
On Tue, Nov 27, 2018 at 06:53:16PM +, Whaley, Graham wrote:
> (back to an old thread... this has rippled near the top of my pile again)
> 
> > -Original Message-
> > From: Clark Boylan [mailto:cboy...@sapwetik.org]
> > Sent: Tuesday, October 23, 2018 6:03 PM
> > To: Whaley, Graham ; openstack-
> > in...@lists.openstack.org; thie...@openstack.org
> > Cc: Ernst, Eric ; fu...@yuggoth.org
> > Subject: Re: Adding index and views/dashboards for Kata to ELK stack
> [snip]
> > > I don't think the Zuul Ansible role will be applicable - the metrics run
> > > on bare metal machines running Jenkins, and export their JSON results
> > > via a filebeat socket. My theory was we'd then add the socket input to
> > > the logstash server to receive from that filebeat - as in my gist at
> > >
> > https://gist.github.com/grahamwhaley/aa730e6bbd6a8ceab82129042b186467
> > 
> > I don't think we would want to expose write access to the unauthenticated
> > logstash and elasticsearch system to external systems. The thing that makes 
> > this
> > secure today is we (community infrastructure team) control the existing 
> > writers.
> > The existing writers are available for your use (see below) should you 
> > decide to
> > use them.
> 
> My theory was we'd secure the connection at least using the logstash/beat SSL 
> connection, and only we/the infra group would have access to the keys:
> https://www.elastic.co/guide/en/beats/filebeat/current/configuring-ssl-logstash.html
> 
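For reference, the filebeat side of that would look roughly like the following
(the host, port and key paths are placeholders, not real deployment values):

  output.logstash:
    hosts: ["logstash.openstack.org:9999"]
    ssl.certificate_authorities: ["/etc/filebeat/ca.crt"]
    ssl.certificate: "/etc/filebeat/client.crt"
    ssl.key: "/etc/filebeat/client.key"
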
> The machines themselves are only accessible by the CNCF CIL owners and 
> nominated Kata engineers with the keys.
>
> > 
> > >
> > > One crux here is that the metrics have to run on a machine with
> > > guaranteed performance (so not a shared/virtual cloud instance), and
> > > hence currently run under Jenkins and not on the OSF/Zuul CI infra.
> > 
> > Zuul (by way of Nodepool) can speak to arbitrary machines as long as they 
> > speak
> > an ansible connection protocol. In this case the default of ssh would 
> > probably
> > work when tied to nodepool's static instance driver. The community
> > infrastructure happens to only talk to cloud VMs today because that is what 
> > we
> > have been given access to, but should be able to talk to other resources if
> > people show up with them.
> 
> If we ignore the fact that all current Kata CI is running on Jenkins, and we 
> are not presently transitioning to Zuul afaik, then even if we did integrate 
> the bare metal CNCF CIL packet.net machines via ansible/SSH/nodepool/Zuul, 
> afaict you'd still be running the same CI tasks on the same machines and 
> injecting the Elastic data through the same SSL socket/tunnel into Elastic.

John Studarus gave a talk at the OpenStack Summit about using zuul and
packet.net; during the talk he mentioned starting to work on a nodepool
driver for packet.net bare metal servers.  I believe the plan is to
upstream it, which would then allow for both static and packet.net dynamic
providers.
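As a rough illustration of the static side (node names and labels below are
placeholders, not a real configuration), a nodepool static provider looks
something like:

  providers:
    - name: packet-static
      driver: static
      pools:
        - name: main
          nodes:
            - name: kata-metrics01.example.org
              labels:
                - kata-metrics-bare-metal
              username: zuul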

> I know you'd like to keep as much of the infra under your control, but the 
> only bit I think that would be different is the Jenkins Master. Given the 
> Jenkins job running the slave only executes master branch merges, which have 
> undergone peer review (which would be the same jobs that Zuul would run), 
> then I'm not sure there is any security difference here in reality between 
> having the Kata Jenkins master or Zuul drive the slaves.
> 
> > 
> > >
> > > Let me know you see any issues with that Jenkins/filebeat/socket/JSON 
> > > flow.
> > >
> > > I need to deploy a new machine to process master branch merges to
> > > generate the data (currently we have a machine that is processing PRs at
> > > submission, not merge, which is not the data we want to track long
> > > term). I'll let you know when I have that up and running. If we wanted
> > > to move on this earlier, then I could inject data to a test index from
> > > my local test setup - all it would need I believe is the valid keys for
> > > the filebeat->logstash connection.
> 
> Oh, I've deployed a Jenkins slave and job to test out the first stage of the 
> flow btw:
> http://jenkins.katacontainers.io/job/kata-metrics-runtime-ubuntu-16-04-master/
> 
> > >
> > > > Clark
> > > Thanks!
> > >   Graham (now on copy ;-)
> > 
> > Ideally we'd make use of the existing community infrastructure as much as
> > possible to make this sustainable and secure. We are happy to modify our
> > existing tooling as necessary to do this. Update the logstash 
> > configuration, add
> > Nodepool resources, have grafana talk to elasticsearch, and so on.
> 
> I think the only key decision is if we can use the packet.net slaves as 
> driven by the kata Jenkins master, or if we have to move the management of 
> those into Zuul.
> For expediency and consistency with the rest of the Kata CI, obviously I lean 
> heavily towards Jenkins.
> If we do have to go with Zuul, then I think we'll have to work out who has 
> access to and how they can modify the Z

Re: [OpenStack-Infra] Adding index and views/dashboards for Kata to ELK stack

2018-10-05 Thread Paul Belanger
On Fri, Oct 05, 2018 at 01:14:21PM -0700, Clark Boylan wrote:
> On Wed, Oct 3, 2018, at 3:16 AM, Whaley, Graham wrote:
> > Hi Infra team.
> > 
> > First, a brief overview for folks who have not been in this loop.
> > Kata Containers has a CI that runs some metrics, and spits out JSON 
> > results. We'd like to store those results in the OSF logstash ELK infra 
> > (http://logstash.openstack.org/#/dashboard/file/logstash.json), and set 
> > up some Kibana views and dashboards so we can monitor historical trends 
> > and project health (and longer term maybe some advanced trend regression 
> > triggers).
> > 
> > There is a relevant github Issue/thread showing a working PoC here:
> > https://github.com/kata-containers/ci/issues/60#issuecomment-426579084
> > 
> > I believe we are at the point where we should set up a trial index and 
> > keys (for the filebeat/logstash/elastic injection) etc. so we can start 
> > to tie the parts together.
> > What do I need to make that next step happen? Do I need to open a formal 
> > request ticket/Issue somewhere, or can we just thrash it out here?
> 
> Here and in code review is fine.
> 
> > 
> > This email is part aimed at ClarkB :-), but obviously may involve others 
> > etc. Input most welcome.
> 
> Currently all of our jobs submit jobs to the logstash processing system via 
> the ansible role here [0]. That is then fed through our logstash 
> configuration [1]. The first step in this is probably to update the logstash 
> configuration and update the existing Kata jobs in zuul to submit jobs to 
> index that information as appropriate?
> 
> As for Kibana views I'm not sure we ever sorted that out with the in browser 
> version of kibana we are running. I think we can embed dashboard information 
> in the kibana "source" or inject it into elasticsearch? This piece would take 
> some investigation as I don't know what needs to be done off the top of my 
> head. Note that our elasticsearch can't be written to via the public 
> interface to it as there are no authentication and authorization controls for 
> elasticsearch.
> 
> [0] 
> https://git.openstack.org/cgit/openstack-infra/project-config/tree/roles/submit-logstash-jobs/defaults/main.yaml
> [1] 
> https://git.openstack.org/cgit/openstack-infra/logstash-filters/tree/filters/openstack-filters.conf
> 
Yeah, I think it would be great if we could use the JJB / grafyaml
workflow here, where we store the view / dashboard in YAML and then write a
job to push them into the application.  Then we don't need to deal with
public authentication.
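To be clear about what I mean, something along these lines kept in a repo and
pushed into Kibana by a post-merge job; this is purely illustrative and not the
schema of grafyaml or any existing Kibana tooling:

  dashboard:
    title: kata-metrics
    index-pattern: logstash-*
    panels:
      - title: boot-time trend
        type: line
        query: 'build_name:"kata-metrics-runtime-ubuntu-16-04-master"'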

-Paul

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

Re: [OpenStack-Infra] Continued discussion of Winterscale naming

2018-09-12 Thread Paul Belanger
On Wed, Sep 12, 2018 at 02:26:09PM -0700, Clark Boylan wrote:
> On Wed, Aug 29, 2018, at 11:06 AM, Clark Boylan wrote:
> > On Tue, Aug 21, 2018, at 11:33 AM, Jeremy Stanley wrote:
> > > The Infra team has been brainstorming and collecting feedback in
> > > https://etherpad.openstack.org/p/infra-services-naming as to what
> > > actual name/domain the Winterscale effort will use. If you've not
> > > been following along, the earlier discussions can be found in the
> > > following mailing list threads:
> > > 
> > > http://lists.openstack.org/pipermail/openstack-infra/2018-May/005957.html
> > > http://lists.openstack.org/pipermail/openstack-infra/2018-July/006010.html
> > > 
> > > So far the "short list" at the bottom of the pad seems to contain
> > > only one entry: OpenDev.
> > > 
> > > The OpenStack Foundation has offered to let us take control of
> > > opendev.org for this purpose if we would like. They have it mainly
> > > as a defensive registration to protect the OpenDev Conference brand,
> > > but use a separate opendevconf.org domain for that at present. The
> > > opendev.org domain is just sitting parked on the same nameservers as
> > > openstack.org right now, not in use for anything. The brand itself
> > > has a positive connotation in the community so far, and the OpenDev
> > > Conferences share a lot of similar goals (bringing communities of
> > > people together to collaborate openly on design and development
> > > activities) so this even provides some useful synergy around the
> > > name and possible promotional tie-ins with the events if we like.
> > > 
> > > I know lots of us are eager to move forward on the rebranding, so I
> > > wanted to wake this discussion back up and try to see if we can
> > > drive it to a conclusion. As we continue to take on hosting for
> > > additional large projects, having somewhere for them to send the
> > > contributors and users in their community which has a distinct
> > > identity unclouded by OpenStack itself will help make our services
> > > much more welcoming. If you don't particularly like the OpenDev idea
> > > or have alternatives you think might achieve greater consensus
> > > within our team and present a better image to our extended
> > > community, present and future, please update the above mentioned pad
> > > or follow up to this mailing list thread. Thanks!
> > 
> > I am a fan of OpenDev. I think it gives a path forward that works for 
> > the immediate future and long term. https://review.opendev.org seems 
> > like a reasonable place to do code review for a project :)
> > 
> > I do think it would be good to continue collecting input particularly 
> > from those involved in the day to day infra activities. If we can reach 
> > rough consensus over the next week or so that would give us the 
> > opportunity to use time at the PTG to do a rough sketch of how we can 
> > start "migrating" to the new name.
> > 
> > Your feedback much appreciated.
> > 
> 
> It has been about 3 weeks and the feedback so far has largely been positive. 
> The one concern we have seen raised is that this may be confusing with the 
> OpenDev conference. Fungi makes a great argument that cross promoting between 
> the conference and the development tooling can be a net positive since many 
> of our goals overlap.
> 
> Finding that argument compelling myself, and not seeing any counterarguments 
> I think we should move forward with using the OpenDev name and opendev.org 
> domain for the team and services that we host.
> 
> Long story short let's go ahead and use this name and start making progress 
> on this effort. Next stop: etherpad.opendev.org.
> 
> Thank you,
> Clark
> 
+1 for etherpad.opendev.org

- Paul

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

Re: [OpenStack-Infra] Retiring some repos?

2018-08-29 Thread Paul Belanger
On Wed, Aug 29, 2018 at 07:03:29AM +0200, Andreas Jaeger wrote:
> I dug into the remaining ones and will investigate whether we can retire them:
> 
> openstack-infra/zuul-packaging
> 
>   Paul, should we abandon this one as well?
> 
Yes

> openstack-infra/featuretracker (together with puppet-featuretracker)
> 
>   these work together with openstack/development-proposals from the
>   product working group, I'll reach out to them.
> 
> puppet-docker-registry:
>   See https://review.openstack.org/#/c/399221/ - there's already another
>   tool which we should use, never any commits merged to it.
> 
> pynotedb
>   I'll reach out to storyboard team.
> 
> Andreas
> -- 
>  Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
>   SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
>GF: Felix Imendörffer, Jane Smithard, Graham Norton,
>HRB 21284 (AG Nürnberg)
> GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126
> 
> 
> ___
> OpenStack-Infra mailing list
> OpenStack-Infra@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

Re: [OpenStack-Infra] Continued discussion of Winterscale naming

2018-08-21 Thread Paul Belanger
On Tue, Aug 21, 2018 at 06:33:06PM +, Jeremy Stanley wrote:
> The Infra team has been brainstorming and collecting feedback in
> https://etherpad.openstack.org/p/infra-services-naming as to what
> actual name/domain the Winterscale effort will use. If you've not
> been following along, the earlier discussions can be found in the
> following mailing list threads:
> 
> http://lists.openstack.org/pipermail/openstack-infra/2018-May/005957.html
> http://lists.openstack.org/pipermail/openstack-infra/2018-July/006010.html
> 
> So far the "short list" at the bottom of the pad seems to contain
> only one entry: OpenDev.
> 
> The OpenStack Foundation has offered to let us take control of
> opendev.org for this purpose if we would like. They have it mainly
> as a defensive registration to protect the OpenDev Conference brand,
> but use a separate opendevconf.org domain for that at present. The
> opendev.org domain is just sitting parked on the same nameservers as
> openstack.org right now, not in use for anything. The brand itself
> has a positive connotation in the community so far, and the OpenDev
> Conferences share a lot of similar goals (bringing communities of
> people together to collaborate openly on design and development
> activities) so this even provides some useful synergy around the
> name and possible promotional tie-ins with the events if we like.
> 
> I know lots of us are eager to move forward on the rebranding, so I
> wanted to wake this discussion back up and try to see if we can
> drive it to a conclusion. As we continue to take on hosting for
> additional large projects, having somewhere for them to send the
> contributors and users in their community which has a distinct
> identity unclouded by OpenStack itself will help make our services
> much more welcoming. If you don't particularly like the OpenDev idea
> or have alternatives you think might achieve greater consensus
> within our team and present a better image to our extended
> community, present and future, please update the above mentioned pad
> or follow up to this mailing list thread. Thanks!
> -- 
> Jeremy Stanley
>
+1 Exciting!

- Paul

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

Re: [OpenStack-Infra] [zuul] Change publication interface to be directories on node, not executor

2018-08-02 Thread Paul Belanger
On Tue, Oct 10, 2017 at 05:42:12PM -0500, Monty Taylor wrote:
> Hey everybody!
> 
> I'd like to make a proposal for changing how we do logs/artifacts/docs
> collection based on the last few weeks/months of writing things- and of
> having to explain to people how to structure build and publish jobs over the
> last couple of weeks.
> 
> tl;dr - I'd like to change the publication interface to be directories on
> the remote node, not directories on the executor
> 
> Rationale
> =
> 
> If jobs have to copy files back to the executor as part of the publication
> interface, then the zuul admins can't change the mechanism of how artifacts,
> logs or docs are published without touching a ton of potentially in-tree job
> content.
> 
> Doing so should also allow us to stop having a second copy of build logic in
> the artifact release jobs.
> 
> Implementation
> ==
> 
> Define a root 'output' dir on the remote nodes. Different types of output
> can be collected by putting them into subdirectories of that dir on the
> remote nodes and expecting that base jobs will take care of them.
> 
> People using jobs defined in zuul-jobs should define a variable
> "zuul_output_dir", either in site-variables or in their base job. Jobs in
> zuul-jobs can and will depend on that variable existing - it will be
> considered part of the base job interface zuul-jobs expects.
> 
> Jobs in zuul-jobs will recognize three specific types of job output:
> * logs
> * artifacts
> * docs
> 
> Jobs in zuul-jobs will put job outputs into "{{ zuul_output_dir }}/logs",
> "{{ zuul_ouptut_dir }}/artifacts" and "{{ zuul_output_dir }}/docs" as
> appropriate.
> 
> A role will be added to zuul-jobs that can be used in base jobs that will
> ensure those directories all exist.
> 
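A minimal sketch of that piece; the role name and file layout here are
assumptions, not the final zuul-jobs implementation:

  # roles/ensure-output-dirs/tasks/main.yaml
  - name: Ensure the standard output directories exist on the node
    file:
      path: "{{ zuul_output_dir }}/{{ item }}"
      state: directory
    with_items:
      - logs
      - artifacts
      - docs
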
> Compression
> ---
> 
> Deployers may choose to have their base job compress items in {{
> zuul_output_dir }} as part of processing them, or may prefer not to
> depending on whether CPU or network is more precious. Jobs in zuul-jobs
> should just move/copy things into {{ zuul_output_dir }} on the node and
> leave compression, transfer and publication as a base-job operation.
> 
> Easy Output Copying
> ---
> 
> A role will also be added to zuul-jobs to facilitate easy/declarative output
> copying.
> 
> It will take as input a dictionary of files/folders named
> 'zuul_copy_output'. The role will copy contents into {{ zuul_output_dir }}
> on the remote node and is intended to be used before output fetching in a
> base job's post-playbook.
> 
> The input will be a dictionary so that zuul variable merging can be used for
> accumulating.
> 
> Keys of the dictionary will be things to copy. Valid values will be the type
> of output to copy:
> 
> * logs
> * artifacts
> * docs
> * null   # ansible null, not the string null
> 
> null will be used for 'a parent job said to copy this, but this job wants to
> not do so'
> 
> The simple content copying role will not be flexible or powerful. People
> wanting more expressive output copying have all of the normal tools for
> moving files around at their disposal. It will obey the following rules:
> 
> * All files/folders will be copied if they exist, or ignored if they don't
> * Items will be copied as-if 'cp -a' had been used.
> * Order of files is undefined
> * Conflicts between declared files are an error.
> 
> Jobs defined in zuul-jobs should not depend on the {{ zuul_copy_output }}
> variable for content they need copied in place on a remote node. Jobs
> defined in zuul-jobs should instead copy their output to {{ zuul_output_dir
> }} This prevents zuul deployers from being required to put the easy output
> copying role into their base jobs. Jobs defined in zuul-jobs may use the
> role behind the scenes.
> 
> Filter Plugin
> -
> 
> Since the pattern of using a dictionary in job variables to take advantage
> of variable merging is bound to come up more than once, we'll define a
> filter plugin in zuul called 'zuul_list_from_value' (or some better name)
> that will return the list of keys that match a given value. So that given
> the following in a job definition:
> 
> vars:
>   zuul_copy_output:
>     foo/bar.html: logs
>     other/logs/here.html: logs
>     foo/bar.tgz: artifacts
> 
> Corresponding Ansible could do:
> 
> - copy:
>     src: "{{ item }}"
>     dest: "{{ zuul_log_dir }}"
>   with_items: "{{ zuul_copy_output | zuul_list_from_value('logs') }}"
> 
> For OpenStack Today
> ===
> 
> We will define zuul_output_dir to be "{{ ansible_user_dir }}/zuul-output" in
> our site-variables.yaml file.
> 
> Implement the following in OpenStack's base job:
> 
> We will have the base job will include the simple copying role.
> 
> Logs
> 
> 
> Base job post playbook will always grab {{ zuul_output_dir }}/logs from
> nodes, same as today:
> * If there is more than one node, grab it into {{ zuul.executor.log_dir
> }}/{{ inventory_hostname }}.
> * If only one node, grab into  {{ zuul.execut

Re: [OpenStack-Infra] Reworking zuul base jobs

2018-08-02 Thread Paul Belanger
On Mon, Jul 23, 2018 at 11:22:13AM -0400, Paul Belanger wrote:
> Greetings,
> 
> A few weeks ago, I sent an email to the zuul-discuss[1] ML talking about the
> idea of splitting a base job in project-config into trusted / untrusted parts.
> Since then we've actually implemented the idea in the rdoproject.org Zuul, and
> it seems to be working very well.
> 
> Basically, I'd like to do the same here with our base job but first wanted to
> give a heads up.  Here is the basic idea:
> 
> 
>   project-config (trusted)
>   - job:
>       name: base-minimal
>       parent: null
>       description: top-level job
> 
>   - job:
>       name: base-minimal-test
>       parent: null
>       description: top-level job for testing base-minimal
> 
>   openstack-zuul-jobs (untrusted)
>   - job:
>       name: base
>       parent: base-minimal
> 
> This then allows us to start moving tasks / roles like configure-mirrors from
> trusted into untrusted, since it doesn't really need trusted context on the
> executor.
> 
> In rdoproject, our base-minimal job is much smaller than openstack-infra's
> today,
> but really has just become used for handling secrets (post-run playbooks) and
> zuul_stream (pre). Everything else has been moved into untrusted.
> 
> Here, we likely need to have a little more discussion around what we move into
> untrusted from trusted, but once we've done the dance to place base into
> openstack-zuul-jobs and parent to base-minimal in project-config, we can start
> testing.
> 
> We'd still need to do the base-minimal / base-minimal-test dance for trusted
> context, but it should be much smaller the things we need to test. As a 
> working
> example, the recent changes to pypi mirrors I believe would have been much
> easier to test in this setup.
> 
> - Paul
> 
> [1] http://lists.zuul-ci.org/pipermail/zuul-discuss/2018-July/000508.html

I've gone ahead and pushed up a stack of changes at topic:base-minimal-jobs[2].
We need to rename the current base-minimal job to base-ozj first, then
depending on how minimal the new base-minimal is, we might be able to remove
base-ozj once everything is finished.

- Paul

[2] https://review.openstack.org/#/q/topic:base-minimal-jobs

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

[OpenStack-Infra] Reworking zuul base jobs

2018-07-23 Thread Paul Belanger
Greetings,

A few weeks ago, I sent an email to the zuul-discuss[1] ML talking about the
idea of splitting a base job in project-config into trusted / untrusted parts.
Since then we've actually implemented the idea in the rdoproject.org Zuul, and it
seems to be working very well.

Basically, I'd like to do the same here with our base job but first wanted to
give a heads up.  Here is the basic idea:


  project-config (trusted)
  - job:
      name: base-minimal
      parent: null
      description: top-level job

  - job:
      name: base-minimal-test
      parent: null
      description: top-level job for testing base-minimal

  openstack-zuul-jobs (untrusted)
  - job:
      name: base
      parent: base-minimal

This then allows us to start moving tasks / roles like configure-mirrors from
trusted into untrusted, since it doesn't really need trusted context on the
executor.
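As a sketch, the untrusted side could then look like this (the playbook path is
illustrative, not an existing file):

  # openstack-zuul-jobs (untrusted)
  - job:
      name: base
      parent: base-minimal
      # pre.yaml would run configure-mirrors and other setup that needs
      # no secrets or trusted context on the executor
      pre-run: playbooks/base/pre.yaml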

In rdoproject, our base-minimal job is much smaller than openstack-infra's today,
but really has just become used for handling secrets (post-run playbooks) and
zuul_stream (pre). Everything else has been moved into untrusted.

Here, we likely need to have a little more discussion around what we move into
untrusted from trusted, but once we've done the dance to place base into
openstack-zuul-jobs and parent to base-minimal in project-config, we can start
testing.

We'd still need to do the base-minimal / base-minimal-test dance for trusted
context, but it should be much smaller the things we need to test. As a working
example, the recent changes to pypi mirrors I believe would have been much
easier to test in this setup.

- Paul

[1] http://lists.zuul-ci.org/pipermail/zuul-discuss/2018-July/000508.html

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

[OpenStack-Infra] zuulv3 feedback for 3pci

2018-07-05 Thread Paul Belanger
Greetings,

Over the last few weeks I've been helping the RDO project migrate away from
zuulv2 (jenkins) to zuulv3. Today all jobs have been migrated with the help of
the zuul-migrate script. We'll start deleting jenkins bits in the next few days.

I wanted to get down some things I've noticed in the process as feedback to
thirdparty CI operators. Hopefully this will help others.

Removal of zuul-cloner
--

This by far was the largest issue we had in the RDO project. The first thing it
meant was the need for much more HDD space. We almost quadrupled the storage
quota needed to run zuulv3 properly because we could no longer use zuul-cloner
against git.o.o.

Right now rdo is running 4 zuul-executors / 4 zuul-mergers; with the increase in
storage requirements this also meant we needed faster disks.  The previous
servers used under zuulv2 couldn't handle the IO now required, so we've had to
rebuild them backed with SSD. Previously they were boot-from-volume on ceph.

Need for use-cached-repos
-

Today, use-cached-repos is only available to openstack-infra/project-config; we
should promote it into zuul-jobs to help reduce the amount of pressure on
zuul-executors when jobs start. In the case of 3pci, the prepare-workspace role
isn't up to the task of syncing everything at once.

The feedback here is to somehow allow the base job to be smart enough to work
whether or not a project is found in /opt/git.  Today we have 2 different images
in rdo: 1 has the cache of upstream git.o.o and the other doesn't.
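Something along these lines is what I have in mind, as a sketch only (not a
tested role):

  # Seed the workspace from the image cache only when the cache exists;
  # otherwise the normal repo preparation from the executor still applies.
  - name: Check for a cached copy of the project
    stat:
      path: "/opt/git/{{ zuul.project.canonical_name }}"
    register: cached_repo

  - name: Clone from the image cache when present
    command: >
      git clone /opt/git/{{ zuul.project.canonical_name }}
      {{ ansible_user_dir }}/{{ zuul.project.src_dir }}
    when: cached_repo.stat.exists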

Namespace projects with fqdn


This one is likely unique to rdoproject, but because we have 2 connections to
different gerrit systems, review.rdoproject.org and git.openstack.org, we
actually have duplicate project names. For example:

  openstack/tripleo-common

which means, for zuul we have to write projects as:

  project:
    name: git.openstack.org/openstack/tripleo-common

  project:
    name: review.rdoproject.org/openstack/tripleo-common

There are legacy reasons for this, and we plan on cleaning up review.r.o; however,
because of this duplication we cannot use upstream jobs right now. My initial
thought would be to update jobs, in this case devstack, to use the following for
required-projects:

  required-projects:
    - git.openstack.org/openstack-dev/devstack
    - git.openstack.org/openstack/tripleo-common

and propose the patch upstream.  Again, this is likely specific to rdoproject,
but it is something that right now blocks them from loading jobs from zuul.o.o.

I do have some other suggestions, but they are more specific to zuul. I could
post them as a follow-up here or on the zuul ML.

I am happy I was able to help in the original migration of the openstack
projects from jenkins to zuulv3; it helped a lot when I was debugging zuul
failures. But overall the rdo project didn't have any major issues with job
content.

Thanks,
Paul

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

Re: [OpenStack-Infra] Stop supporting bindep-fallback.txt moving forward

2018-04-20 Thread Paul Belanger
On Fri, Apr 20, 2018 at 05:34:09PM +, Jeremy Stanley wrote:
> On 2018-04-20 12:31:24 -0400 (-0400), Paul Belanger wrote:
> [...]
> > That is fine, if we want to do the mass migration to bionic first,
> > then start looking at which projects are still using
> > bindep-fallback.txt is fine with me.
> > 
> > I just wanted to highlight I think it is time we start pushing a
> > little harder on projects to stop using this logic and start
> > managing bindep.txt themself.
> 
> Yep, this is something I _completely_ agree with. We could even
> start with a deprecation warning in the fallback path so it starts
> showing up more clearly in the job logs too.
> -- 
> Jeremy Stanley

Okay, looking at codesearch.o.o, I've been able to start pushing up changes to
remove bindep-fallback.txt.

https://review.openstack.org/#/q/topic:bindep.txt

This adds bindep.txt to projects that need it, and also removes the legacy
install-distro-packages.sh scripts in favor of our bindep role.
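For projects picking this up, the job side is just a pre-run playbook applying
the bindep role, roughly like this (the playbook wrapper is illustrative, and
bindep_profile is the role variable as I remember it):

  - hosts: all
    roles:
      - role: bindep
        bindep_profile: test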

Paul

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

Re: [OpenStack-Infra] PTG September 10-14 in Denver

2018-04-20 Thread Paul Belanger
On Fri, Apr 20, 2018 at 10:42:48AM -0700, Clark Boylan wrote:
> Hello everyone,
> 
> I've been asked if the Infra team plans to attend the next PTG in Denver. My 
> current position is that it would be good to attend as a team as I think it 
> will give us a good opportunity to work on modernizing config management 
> efforts. But before I go ahead and commit to that it would be helpful to get 
> a rough headcount of who intends to go (if it will just be me then likely 
> don't need to have team space).
> 
> Don't worry if you don't have approval yet or have to sort out other details. 
> Mostly just interested in a "do we intend on being there or not" type of 
> answer.
> 
> More details on the event can be found at 
> http://lists.openstack.org/pipermail/openstack-dev/2018-April/129564.html. 
> Feel free to ask questions if that will help you too.
> 
> Let me know (doesn't have to be to the list if you aren't comfortable with 
> that) and thanks!
> 
I intend on being there (pending travel approval).

-Paul

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

Re: [OpenStack-Infra] Stop supporting bindep-fallback.txt moving forward

2018-04-20 Thread Paul Belanger
On Fri, Apr 20, 2018 at 09:13:17AM -0700, Clark Boylan wrote:
> On Fri, Apr 20, 2018, at 9:01 AM, Jeremy Stanley wrote:
> > On 2018-04-19 19:15:18 -0400 (-0400), Paul Belanger wrote:
> > [...]
> > > today ubuntu-bionic does seem to pass properly with
> > > bindep-fallback.txt, but perhaps we prime it with a bad package on
> > > purpose to force the issue. As clarkb points out, the downside to
> > > this is that it makes it harder for projects to be flipped to
> > > ubuntu-bionic.
> > [...]
> > 
> > My main concern is that this seems sort of at odds with how we
> > discussed simply forcing all PTI jobs from ubuntu-xenial to
> > ubuntu-bionic on master branches rather than giving projects the
> > option to transition on their own timelines (which worked out pretty
> > terribly when we tried being flexible with them on the ubuntu-trusty
> > to ubuntu-xenial transition a couple years ago). Adding a forced
> > mass migration to in-repo bindep.txt files at the same moment we
> > also force all the PTI jobs to a new platform will probably result
> > in torches and pitchforks.
> 
> Yup, this was my concern as well. I think the value of not being on older 
> platforms outweighs needing to manage a list of packages for longer. We 
> likely just need to keep pushing on projects to add/update bindep.txt in repo 
> instead. We can run a logstash query against job-output.txt looking for 
> output of using the fallback file and nicely remind projects if they show up 
> on that list.
> 
That is fine; if we want to do the mass migration to bionic first and then start
looking at which projects are still using bindep-fallback.txt, that works for me.

I just wanted to highlight that I think it is time we start pushing a little harder
on projects to stop using this logic and start managing bindep.txt themselves.

-Paul

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

Re: [OpenStack-Infra] Stop supporting bindep-fallback.txt moving forward

2018-04-20 Thread Paul Belanger
On Fri, Apr 20, 2018 at 09:07:25AM +0200, Andreas Jaeger wrote:
> On 2018-04-20 01:15, Paul Belanger wrote:
> > Greetings,
> > 
> > I'd like to propose we hard freeze changes to bindep-fallback.txt[1] and 
> > push
> > projects to start using a local bindep.txt file.
> > 
> > This would mean, moving forward with ubuntu-bionic, if a project was still
> > depending on bindep-fallback.txt, their jobs may raise a syntax error.
> > 
> > In fact, today ubuntu-bionic does seem to pass properly with
> > bindep-fallback.txt, but perhaps we prime it with a bad package on purpose 
> > to
> > force the issue. As clarkb points out, the downside to this is that it makes it
> > harder for projects to be flipped to ubuntu-bionic.  It is possible we could
> > also prime gerrit patches for projects that are missing bindep.txt to help 
> > push
> > this effort along.
> > 
> > Thoughts?
> > 
> > [1] 
> > http://git.openstack.org/cgit/openstack-infra/project-config/tree/nodepool/elements/bindep-fallback.txt
> 
> This might break all stable branches as well. Pushing those changes in
> is a huge effort ;( Is that worth it?
> 
I wouldn't expect stable branches to be running bionic, unless I am missing
something obvious.

> 
> Andreas
> -- 
>  Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
>   SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
>GF: Felix Imendörffer, Jane Smithard, Graham Norton,
>HRB 21284 (AG Nürnberg)
> GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126
> 
> 
> ___
> OpenStack-Infra mailing list
> OpenStack-Infra@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

[OpenStack-Infra] Stop supporting bindep-fallback.txt moving forward

2018-04-19 Thread Paul Belanger
Greetings,

I'd like to propose we hard freeze changes to bindep-fallback.txt[1] and push
projects to start using a local bindep.txt file.

This would mean, moving forward with ubuntu-bionic, if a project was still
depending on bindep-fallback.txt, their jobs may raise a syntax error.

In fact, today ubuntu-bionic does seem to pass properly with
bindep-fallback.txt, but perhaps we prime it with a bad package on purpose to
force the issue. As clarkb points out, the downside to this is that it makes it
harder for projects to be flipped to ubuntu-bionic.  It is possible we could
also prime gerrit patches for projects that are missing bindep.txt to help push
this effort along.

Thoughts?

[1] 
http://git.openstack.org/cgit/openstack-infra/project-config/tree/nodepool/elements/bindep-fallback.txt

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

Re: [OpenStack-Infra] Adding ARM64 cloud to infra

2018-04-12 Thread Paul Belanger
On Thu, Apr 12, 2018 at 09:00:15AM -0400, Paul Belanger wrote:
> On Mon, Jan 15, 2018 at 01:11:23PM +, Frank Jansen wrote:
> > Hi Ian,
> > 
> > do you have any insight into the availability of a physical environment for 
> > the ARM64 cloud?
> > 
> > I’m curious, as there may be a need for downstream testing, which I would 
> > assume will want to make use of our existing OSP CI framework.
> > 
> The hardware is donated by Linaro and the first cloud is currently located in
> China. As for details of hardware, I recently asked hrw in #openstack-infra 
> and
> this was his reply:
> 
>   hrw | pabelanger: misc aarch64 servers with 32+GB of ram and some GB/TB of 
> storage. different vendors. That's probably the closest to what I can say
>   hrw | pabelanger: some machines may be under NDA, some never reached mass 
> market, some are mass market available, some are no longer mass market 
> available.
> 
> As for downstream testing, are you looking for arm64 hardware or hoping to use
> the Linaro clouds for the testing?
> 
Also, I just noticed this was from Jan 15th but only just showed up in my
inbox. Sorry for the noise; I will try to look at headers before replying :)

Paul

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

Re: [OpenStack-Infra] Adding ARM64 cloud to infra

2018-04-12 Thread Paul Belanger
On Mon, Jan 15, 2018 at 01:11:23PM +, Frank Jansen wrote:
> Hi Ian,
> 
> do you have any insight into the availability of a physical environment for 
> the ARM64 cloud?
> 
> I’m curious, as there may be a need for downstream testing, which I would 
> assume will want to make use of our existing OSP CI framework.
> 
The hardware is donated by Linaro and the first cloud is currently located in
China. As for details of hardware, I recently asked hrw in #openstack-infra and
this was his reply:

  hrw | pabelanger: misc aarch64 servers with 32+GB of ram and some GB/TB of 
storage. different vendors. That's probably the closest to what I can say
  hrw | pabelanger: some machines may be under NDA, some never reached mass 
market, some are mass market available, some are no longer mass market 
available.

As for downstream testing, are you looking for arm64 hardware or hoping to use
the Linaro clouds for the testing?

- Paul

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

Re: [OpenStack-Infra] Recap of the Cross Community Infra/CI/CD event before ONS

2018-03-29 Thread Paul Belanger
On Thu, Mar 29, 2018 at 11:11:39AM -0700, Clark Boylan wrote:
> Hello everyone,
> 
> Thought I would give a recap of the Cross Community CI event that Fatih, 
> Melvin, and Robyn hosted prior to the ONS conference this last weekend. As a 
> small disclaimer there was a lot to ingest over a short period of time so 
> apologies if I misremember and get names or projects or topics wrong.
> 
> The event had representatives from OpenStack, Ansible, Linux Foundation, 
> OpenDaylight, OPNFV, ONAP, CNCF, and fd.io (and probably others that I don't 
> remember). The event was largely split into two halves, the first a get to 
> know each project (the community they represent, the tools and methods they 
> use and the challenges they face) and the second working together to reach 
> common understanding on topics such as vocabulary, tooling pracitices, and 
> addressing particular issues that affect many of us. Notes were taken for 
> each day (half) and can be found on mozilla's etherpad [0] [1].
> 
> My biggest takeaway from the event was that while we produce different 
> software we face many of the same challenges performing CI/CD for this 
> software and there is a lot of opportunity for us to work together. In many 
> cases we already use many of the same tools. Gerrit for example is quite 
> popular with the LF projects. In other places we have made distinct choices 
> like Jenkins or Zuul or Gitlab CI, but still have to solve similar issues 
> across these tools like security of job runs and signing of release artifacts.
> 
> I've personally volunteered along with Trevor Bramwell at the LF to sort out 
> some of the common security issues we face running arbitrary code pulled down 
> from the Internet. Another topic that had a lot of interest was building (or 
> consuming some existing if it already exists) message bus to enable machine 
> to machine communication between CI systems. This would help groups like 
> OPNFV which are integrating the output of OpenStack and others to know when 
> there are new things that needs testing and where to get them.
> 
> Basically we previously operated in silos despite significant overlap in 
> tooling and issues we face and since we all work on open source software 
> little prevents us from working together so we should do that more. If this 
> sounds like a good idea and is interesting to you there is a wiki [2] with 
> information on places to collaborate. Currently there are things like a 
> mailing list, freenode IRC channel (other chat tools too if you prefer), and 
> a wiki. Feel free to sign up and get involved. Also I'm happy to give my 
> thoughts on the event if you have further questions.
> 
> [0] https://public.etherpad-mozilla.org/p/infra_cicd_day1
> [1] https://public.etherpad-mozilla.org/p/infra_cicd_day2
> [2] https://gitlab.openci.io/openci/community/wikis/home#collaboration-tools
> 
> Thank you to everyone who helped organize and attended making it a success,
> Clark
> 
Great report,

What was the feedback about continuing these meetings every 6 / 12 months? Do you
think it was a one-off or something that looks to grow into a recurring
event?

I'm interested in the message bus topic myself, it reminds me to rebase some
fedmsg patches :)

Thanks for the report,
Paul

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

Re: [OpenStack-Infra] Adding new etcd binaries to tarballs.o.o

2018-03-27 Thread Paul Belanger
On Tue, Mar 27, 2018 at 02:04:51PM +, Jeremy Stanley wrote:
> On 2018-03-27 10:39:35 +1100 (+1100), Tony Breeds wrote:
> [...]
> > Except something sets ETCD_DOWNLOAD_URL to tarballs.o.o
> [...]
> 
> I would be remiss if I failed to remind people that the *manually*
> installed etcd release there was supposed to be a one-time stop-gap,
> and we were promised it would be followed shortly with some sort of
> job which made updating it not-manual. We're coming up on a year and
> it looks like people have given in and manually added newer etcd
> releases at least once since. If this file were important to
> testing, I'd have expected someone to find time to take care of it
> so that we don't have to. If that effort has been abandoned by the
> people who originally convinced us to implement this "temporary"
> workaround, we should remove it until it can be supported properly.
> -- 
> Jeremy Stanley

I have to agree with fungi here; I know I raised the point about removing this
in a meeting at the last PTG.

This only makes it harder for operators to run etcd in production if it is not
packaged properly, which it seems is part of the original issue, as 3rd party CI
is manually patching things.


> ___
> OpenStack-Infra mailing list
> OpenStack-Infra@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra


___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

Re: [OpenStack-Infra] [openstack-dev] [OpenStackAnsible] Tag repos as newton-eol

2018-03-26 Thread Paul Belanger
On Tue, Mar 27, 2018 at 08:56:09AM +1100, Tony Breeds wrote:
> Hi folks,
> Can we ask someone from infra to do this, or add me to bootstrappers
> to do it myself?
> 
Given that we did this last time, I don't see why we can't add you to
bootstrappers again.

Will confirm.

-Paul

> On Thu, Mar 15, 2018 at 10:57:58AM +, Jean-Philippe Evrard wrote:
> > Looks good to me.
> > 
> > On 15 March 2018 at 01:11, Tony Breeds  wrote:
> > > On Wed, Mar 14, 2018 at 09:40:33PM +, Jean-Philippe Evrard wrote:
> > >> Hello folks,
> > >>
> > >> The list is almost perfect: you can do all of those except
> > >> openstack/openstack-ansible-tests.
> > >> I'd like to phase out openstack/openstack-ansible-tests and
> > >> openstack/openstack-ansible later.
> > >
> > > Okay excluding the 2 repos above and filtering out projects that don't
> > > have newton branches we came down to:
> > >
> > > # EOL repos belonging to OpenStackAnsible
> > > eol_branch.sh -- stable/newton newton-eol \
> > >  openstack/ansible-hardening \
> > >  openstack/openstack-ansible-apt_package_pinning \
> > >  openstack/openstack-ansible-ceph_client \
> > >  openstack/openstack-ansible-galera_client \
> > >  openstack/openstack-ansible-galera_server \
> > >  openstack/openstack-ansible-haproxy_server \
> > >  openstack/openstack-ansible-lxc_container_create \
> > >  openstack/openstack-ansible-lxc_hosts \
> > >  openstack/openstack-ansible-memcached_server \
> > >  openstack/openstack-ansible-openstack_hosts \
> > >  openstack/openstack-ansible-openstack_openrc \
> > >  openstack/openstack-ansible-ops \
> > >  openstack/openstack-ansible-os_aodh \
> > >  openstack/openstack-ansible-os_ceilometer \
> > >  openstack/openstack-ansible-os_cinder \
> > >  openstack/openstack-ansible-os_glance \
> > >  openstack/openstack-ansible-os_gnocchi \
> > >  openstack/openstack-ansible-os_heat \
> > >  openstack/openstack-ansible-os_horizon \
> > >  openstack/openstack-ansible-os_ironic \
> > >  openstack/openstack-ansible-os_keystone \
> > >  openstack/openstack-ansible-os_magnum \
> > >  openstack/openstack-ansible-os_neutron \
> > >  openstack/openstack-ansible-os_nova \
> > >  openstack/openstack-ansible-os_rally \
> > >  openstack/openstack-ansible-os_sahara \
> > >  openstack/openstack-ansible-os_swift \
> > >  openstack/openstack-ansible-os_tempest \
> > >  openstack/openstack-ansible-pip_install \
> > >  openstack/openstack-ansible-plugins \
> > >  openstack/openstack-ansible-rabbitmq_server \
> > >  openstack/openstack-ansible-repo_build \
> > >  openstack/openstack-ansible-repo_server \
> > >  openstack/openstack-ansible-rsyslog_client \
> > >  openstack/openstack-ansible-rsyslog_server \
> > >  openstack/openstack-ansible-security
> > >
> > > If you confirm I have the list right this time I'll work on this tomorrow
> > >
> > > Yours Tony.
> > >
> > > __
> > > OpenStack Development Mailing List (not for usage questions)
> > > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > >
> > 
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> Yours Tony.



> ___
> OpenStack-Infra mailing list
> OpenStack-Infra@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra


___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

Re: [OpenStack-Infra] Team dinner at Dublin PTG

2018-02-26 Thread Paul Belanger
On Mon, Feb 19, 2018 at 06:57:29PM -0500, Paul Belanger wrote:
> On Thu, Feb 15, 2018 at 03:17:58PM -0500, Paul Belanger wrote:
> > Greetings,
> > 
> > It is that time again when we all get out from behind our computers and 
> > attempt
> > to be social for the evening. Talking about great subjects like sportsball 
> > and
> > favorite beers.
> > 
> > As usual, please indicate which datetime works better for you by adding 
> > your
> > name and vote to ethercalc[1].
> > 
> > Right now, we are likely going to end up at a pub for drinks and food, if 
> > you
> > have a specific place in mind, please reply.  I'll do my best to find enough
> > room for everybody, however unsure if everybody will sit together at a large
> > table.
> > 
> > [1] https://ethercalc.openstack.org/pqhemnrgnz7t
> 
> Just a reminder to please take a moment to add your name to the team dinner
> list, so far it looks like we'll meet on Monday or Tuesday night.
> 
> Thanks,
> Paul

Greetings everybody! Hopefully this isn't too late in the process, but it does
seem Tuesday was the best evening for everybody to meet up.

On Sunday I was at Fagan's Pub, a short walk from The Croke Park hotel, for
drinks and food, and it seems very Irish.

So, I am proposing that after the Official PTG Networking Reception @ 7:30pm we
meet in the lobby of the hotel, walk over, and get drinks and food. I haven't
requested any sort of reservation, but if we think one is required, I'm happy
to take some time tomorrow morning to confirm they can seat everybody.

Thanks again, and hopefully clarkb won't slap me with a trout on IRC.

Paul

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

Re: [OpenStack-Infra] Team dinner at Dublin PTG

2018-02-19 Thread Paul Belanger
On Thu, Feb 15, 2018 at 03:17:58PM -0500, Paul Belanger wrote:
> Greetings,
> 
> It is that time again when we all get out from behind our computers and 
> attempt
> to be social for the evening. Talking about great subjects like sportsball and
> favorite beers.
> 
> As usual, please indicate which datetime works better for you by adding your
> name and vote to ethercalc[1].
> 
> Right now, we are likely going to end up at a pub for drinks and food, if you
> have a specific place in mind, please reply.  I'll do my best to find enough
> room for everybody, however unsure if everybody will sit together at a large
> table.
> 
> [1] https://ethercalc.openstack.org/pqhemnrgnz7t

Just a reminder to please take a moment to add your name to the team dinner
list, so far it looks like we'll meet on Monday or Tuesday night.

Thanks,
Paul

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

Re: [OpenStack-Infra] [nodepool] Restricting images to specific nodepool builders

2018-02-19 Thread Paul Belanger
On Mon, Feb 19, 2018 at 08:28:27AM -0500, David Shrewsbury wrote:
> Hi,
> 
> On Sun, Feb 18, 2018 at 10:25 PM, Ian Wienand  wrote:
> 
> > Hi,
> >
> > How should we go about restricting certain image builds to specific
> > nodepool builder instances?  My immediate issue is with ARM64 image
> > builds, which I only want to happen on a builder hosted in an ARM64
> > cloud.
> >
> > Currently, the builders go through the image list and check "is the
> > existing image missing or too old, if so, build" [1].  Additionally,
> > all builders share a configuration file [2]; so builders don't know
> > "who they are".
> >
> >
> 
> Why not just split the builder configuration file? I don't see a need to
> add code
> to do this.
> 
In our case (openstack-infra) this would require another change to
puppet-nodepool to support it. Not that we cannot, but it would mean we'd have
7[1] different nodepool configuration files to manage: 4 x nodepool-launchers
and 3 x nodepool-builders, since we have 7 services running.

We could update puppet to start templating, or add support for nodepool.d (like
zuul.d) and better split our configs too. I just haven't found time to write
that patch.

I did submit support for homing diskimage builds to a specific builder[2] a
while back, which is more in line with what ianw is asking. This allows us to
assign images to builders, if set.

[1] http://git.openstack.org/cgit/openstack-infra/project-config/tree/nodepool
[2] https://review.openstack.org/461239/
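
To illustrate the idea (the syntax here is purely illustrative and may not
match what 461239 actually proposes), a diskimage entry would simply name the
builder allowed to build it, e.g.:

---
diskimages:
  - name: arm64-ubuntu-xenial
    # hypothetical key: only this builder host picks up the build
    builder: nb03.openstack.org
    elements:
      - block-device-efi
      - vm
      - ubuntu-minimal
---

Any builder whose name doesn't match would skip that image in its build loop.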
> 
> 
> 
> > I'd propose we add an arbitrary tag/match system so that builders can
> > pickup only those builds they mark themselves capable of building?
> >
> > e.g. diskimages would specify required builder tags similar to:
> >
> > ---
> > diskimages:
> >   - name: arm64-ubuntu-xenial
> > elements:
> >   - block-device-efi
> >   - vm
> >   - ubuntu-minimal
> >   ...
> > env-vars:
> >   TMPDIR: /opt/dib_tmp
> >   DIB_CHECKSUM: '1'
> >   ...
> > builder-requires:
> >   architecture: arm64
> > ---
> >
> > The nodepool.yaml would grow another section similar:
> >
> > ---
> > builder-provides:
> >   architecture: arm64
> >   something_else_unique_about_this_builder: true
> > ---
> >
> > For OpenStack, we would template this section in the config file via
> > puppet in [2], ensuring above that only our theoretical ARM64 build
> > machine had that section in its config.
> >
> > The nodepool-builder build loop can then check that its
> > builder-provides section has all the tags specified in an image's
> > "builder-requires" section before deciding to start building.
> >
> > Thoughts welcome :)
> >
> > -i
> >
> > [1] https://git.openstack.org/cgit/openstack-infra/nodepool/
> > tree/nodepool/builder.py#n607
> > [2] https://git.openstack.org/cgit/openstack-infra/project-
> > config/tree/nodepool/nodepool.yaml
> >
> > ___
> > Zuul-discuss mailing list
> > zuul-disc...@lists.zuul-ci.org
> > http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss
> >
> 
> 
> 
> -- 
> David Shrewsbury (Shrews)

> ___
> Zuul-discuss mailing list
> zuul-disc...@lists.zuul-ci.org
> http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss


___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

[OpenStack-Infra] Team dinner at Dublin PTG

2018-02-15 Thread Paul Belanger
Greetings,

It is that time again when we all get out from behind our computers and attempt
to be social for the evening. Talking about great subjects like sportsball and
favorite beers.

As usual, please indicate which datetime works better for you by adding your
name and vote to ethercalc[1].

Right now, we are likely going to end up at a pub for drinks and food, if you
have a specific place in mind, please reply.  I'll do my best to find enough
room for everybody, however unsure if everybody will sit together at a large
table.

[1] https://ethercalc.openstack.org/pqhemnrgnz7t

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

Re: [OpenStack-Infra] Talks for the Vancouver CFP ?

2018-01-23 Thread Paul Belanger
On Mon, Jan 22, 2018 at 05:12:55PM -0500, David Moreau Simard wrote:
> Hi,
> 
> Did we want to brainstorm around topics and talks suggestions from an
> openstack-infra perspective for Vancouver [1] ?
> 
> The deadline is February 8th and the tracks are the following:
> - CI / CD
> - Container Infrastructure
> - Edge Computing
> - HPC / GPU / AI
> - Open Source Community
> - Private & Hybrid Cloud
> - Public Cloud
> - Telecom & NFV
> 
> CI/CD has Zuul and Nodepool written all over it, of course.
> FWIW I'm already planning on submitting a talk that covers how a
> commit in an upstream project ends up being released by RDO which
> includes the upstream Zuul and RDO's instance of Zuul (amongst other
> things).
> 
> I started an etherpad [2], we can brainstorm there ?
> 
> [1]: https://www.openstack.org/summit/vancouver-2018/call-for-presentations/
> [2]: https://etherpad.openstack.org/p/infra-vancouver-cfp
> 
I'd like to see if we can do the zuulv3 workshop again. I think it went well in
Sydney, and being the 2nd time around I know of some changes that could be
made. I was likely going to propose that.

Another one we've done in the past is to give an overview of what
openstack-infra is. Maybe this time around we can discuss the evolution toward
becoming a hosting platform with projects like kata and zuul-ci.org.

-Paul

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

Re: [OpenStack-Infra] Adding ARM64 cloud to infra

2018-01-12 Thread Paul Belanger
On Fri, Jan 12, 2018 at 11:17:33AM +0100, Marcin Juszkiewicz wrote:
> On 12.01.2018 at 01:09, Ian Wienand wrote:
> > On 01/10/2018 08:41 PM, Gema Gomez wrote:
> >> 1. Control-plane project that will host a nodepool builder with 8 vCPUs,
> >> 8 GB RAM, 1TB storage on a Cinder volume for the image building scratch
> >> space.
> > Does this mean you're planning on using diskimage-builder to produce
> > the images to run tests on?  I've seen occasional ARM things come by,
> > but of course diskimage-builder doesn't have CI for it (yet :) so it's
> > status is probably "unknown".
> 
> I had a quick look at diskimage-builder tool.
> 
> It looks to me that you always build MBR based image with one partition.
> This will have to be changed as AArch64 is UEFI based platform (both
> baremetal and VM) so disk needs to use GPT for partitioning and EFI
> System Partition needs to be present (with grub-efi binary on it).
> 
It is often the case when bringing new images online that some changes to DIB
will be required to support them. I suspect somebody with access to AArch64
hardware will first need to run build-image.sh[1] and paste the build.log; that
will build an image locally for you using our DIB elements.

[1] 
http://git.openstack.org/cgit/openstack-infra/project-config/tree/tools/build-image.sh
> I am aware that you like to build disk images on your own but have you
> considered using virt-install with generated preseed/kickstart files? It
> would move several arch related things (like bootloader) to be handled
> by distribution rules instead of handling them again in code.
> 
I don't believe we want to look at using a new tool to build all our images;
switching to virt-install would be a large change. There are reasons why we
build images from scratch, and I don't believe switching to virt-install would
help with that.
> 
> Sent a patch to make it choose proper grub package on aarch64.
> 
> ___
> OpenStack-Infra mailing list
> OpenStack-Infra@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

Re: [OpenStack-Infra] ze04 & #532575

2018-01-11 Thread Paul Belanger
On Thu, Jan 11, 2018 at 07:58:11AM -0500, David Shrewsbury wrote:
> This is probably mostly my fault since I did not WIP or -2 my change in
> 532575 to keep it
> from getting merged without some infra coordination.
> 
> Because of that change, it is also required that we change the user
> zuul-executor starts
> as from root to zuul [1], and that we also open up the new default finger
> port on the
> executors [2]. Once those are in place, we should be ok to restart the
> executors.
> 
> As for ze04, since that one restarted as the 'root' user, and never dropped
> privileges
> to the 'zuul' user due to 532575, I'm not sure what state it is going to be
> in after applying
> [1] and [2]. Would it create files/directories as root that would now be
> inaccessible if it
> were to restart with the zuul user? Think logs, work dirs, etc...
> 
For permissions, we should likely confirm that puppet-zuul will properly set up
zuul:zuul ownership on the required folders. Then on the next puppet run we'd
be protected.
> 
> -Dave
> 
> 
> [1] https://review.openstack.org/532594
> [2] https://review.openstack.org/532709
> 
> 
> On Wed, Jan 10, 2018 at 11:53 PM, Ian Wienand  wrote:
> 
> > Hi,
> >
> > To avoid you having to pull apart the logs starting ~ [1], we
> > determined that ze04.o.o was externally rebooted at 01:00UTC (there is
> > a rather weird support ticket which you can look at, which is assigned
> > to a rackspace employee but in our queue, saying the host became
> > unresponsive).
> >
> > Unfortunately that left a bunch of jobs orphaned and necessitated a
> > restart of zuul.
> >
> > However, recent changes to not run the executor as root [2] were thus
> > partially rolled out on ze04 as it came up after reboot.  As a
> > consequence when the host came back up the executor was running as
> > root with an invalid finger server.
> >
> > The executor on ze04 has been stopped, and the host placed in the
> > emergency file to avoid it coming back.  There are now some in-flight
> > patches to complete this transition, which will need to be staged a
> > bit more manually.
> >
> > The other executors have been left as is, based on the KISS theory
> > they shouldn't restart and pick up the code until this has been dealt
> > with.
> >
> > Thanks,
> >
> > -i
> >
> >
> > [1] http://eavesdrop.openstack.org/irclogs/%23openstack-
> > infra/%23openstack-infra.2018-01-11.log.html#t2018-01-11T01:09:20
> > [2] https://review.openstack.org/#/c/532575/
> >
> > ___
> > OpenStack-Infra mailing list
> > OpenStack-Infra@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra
> 
> 
> 
> 
> -- 
> David Shrewsbury (Shrews)

> ___
> OpenStack-Infra mailing list
> OpenStack-Infra@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra


___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

Re: [OpenStack-Infra] Hostnames

2018-01-06 Thread Paul Belanger
On Sat, Jan 06, 2018 at 10:21:12AM -0800, Clark Boylan wrote:
> On Sat, Jan 6, 2018, at 10:03 AM, James E. Blair wrote:
> > Hi,
> > 
> > It seems that every time we boot a new server, it either randomly has a
> > hostname of foo, or foo.openstack.org.  And maybe that changes between
> > the first boot and second.
> > 
> > The result of this is that our services which require that they know
> > their hostname (which is a lot, especially the complicated ones) end up
> > randomly working or not.  We waste time repeating the same diagnosis and
> > manual fix each time.
> > 
> > What is the cause of this, and how do we fix this correctly?
> 
> It seems to be an intentional behavior [0] of part of the launch node build 
> process [1]. We could remove the split entirely there and in the hosts and 
> mailnametemplate to use fqdns as hostname to fix it.
> 
> [0] 
> https://git.openstack.org/cgit/openstack-infra/system-config/tree/playbooks/roles/set_hostname/tasks/main.yml#n12
> [1] 
> https://git.openstack.org/cgit/openstack-infra/system-config/tree/launch/launch-node.py#n209
> 
> Clark
> 
We also talked about removing cloud-init, which has been known to modify our
hostnames on reboot. When I last looked (a few months ago) that was the reason
for the renames; I'm unsure about this time.

I know we also talked about building our own DIBs for control plane servers,
which would move us to glean by default. In the past we discussed using
nodepool to build the images, but didn't want to add passwords for rax into
nodepool.o.o. That would mean a 2nd instance of nodepool; do people think that
would work? Or maybe some sort of periodic job that stores the credentials in
zuul secrets?
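
Purely as a sketch of that last idea (the job, playbook and secret names are
made up, and the ciphertext placeholders would come from encrypting the real
credentials against the project key), the zuul config could look roughly like:

---
- secret:
    name: rax-image-upload
    data:
      username: !encrypted/pkcs1-oaep <ciphertext>
      api_key: !encrypted/pkcs1-oaep <ciphertext>

- job:
    name: build-control-plane-image
    run: playbooks/build-control-plane-image.yaml
    secrets:
      - rax-image-upload

- project:
    name: openstack-infra/project-config
    periodic:
      jobs:
        - build-control-plane-image
---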

PB

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

Re: [OpenStack-Infra] Zuul roadmap

2017-12-12 Thread Paul Belanger
On Tue, Dec 12, 2017 at 05:01:41PM +0100, Matthieu Huin wrote:
> Hello,
> 
> If the getting-started documentation effort is also aimed at end
> users, I'd be happy to help Leif with this: we've written a quick
> start guide for Software Factory explaining how to set up pipelines
> and jobs with Zuul3, and this would probably be better hosted upstream
> with minimal adaptations. Let me know if there's interest for this
> (the storyboard item at
> https://storyboard.openstack.org/#!/story/2001332 doesn't specify
> which kind of doc is expected) and I can submit some patches to the
> documentation.
> 
I was just talking with leifmadsen about it this morning and we're going to
organize a working group on docs in the coming days. With the holidays coming
up quickly, it might be difficult to wrap things up before Christmas.

I know there has already been some discussion between Jim and Leif, plus notes
that Leif and I documented in the etherpad[1]. Using Fedora and github, I
believe the etherpad notes are correct. So the next steps are reformatting them
into RST and tuning the docs.

TL;DR: we have some docs and jobs that ran; now to make that a little more user
friendly.

[1] https://etherpad.openstack.org/p/zuulv3-quickstart

> Best regards,
> 
> MHU
> 
> On Fri, Dec 8, 2017 at 9:25 PM, David Shrewsbury
>  wrote:
> > Hi!
> >
> > On Wed, Dec 6, 2017 at 10:34 AM, James E. Blair  wrote:
> >
> > 
> >
> >>
> >> * Add finger gateway
> >>
> >> The fact that the executor must be started as root in order to listen on
> >> port 79 is awkward for new users.  It can be avoided by configuring it
> >> to listen on a different port, but that's also awkward.  In either case,
> >> it's a significant hurdle, and it's one of the most frequently asked
> >> questions in IRC.  The plan to deal with this is to create a new service
> >> solely to multiplex the finger streams.  This is very similar to the
> >> zuul-web service for multiplexing the console streams, so all the
> >> infrastructure is in place.  And of course, running this service will be
> >> optional, so it means that new users don't even have to deal with it to
> >> get up and running, like they do now.  Adding a new service to the 3.0
> >> list should not be done lightly, but the improvement in experience for
> >> new users will be significant.
> >>
> >> David Shrewsbury has started on this.  I don't think it is out of reach.
> >
> >
> >
> > Indeed, it is not out of reach:
> >
> >https://review.openstack.org/525276
> >
> >
> >
> > --
> > David Shrewsbury (Shrews)
> >
> > ___
> > OpenStack-Infra mailing list
> > OpenStack-Infra@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra
> 
> ___
> OpenStack-Infra mailing list
> OpenStack-Infra@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

Re: [OpenStack-Infra] Gate Issues

2017-12-08 Thread Paul Belanger
On Fri, Dec 08, 2017 at 08:38:24PM +1100, Ian Wienand wrote:
> Hello,
> 
> Just to save people reverse-engineering IRC logs...
> 
> At ~04:00UTC frickler called out that things had been sitting in the
> gate for ~17 hours.
> 
> Upon investigation, one of the stuck jobs was a
> legacy-tempest-dsvm-neutron-full job
> (bba5d98bb7b14b99afb539a75ee86a80) as part of
> https://review.openstack.org/475955
> 
> Checking the zuul logs, it had sent that to ze04
> 
>   2017-12-07 15:06:20,962 DEBUG zuul.Pipeline.openstack.gate: Build  bba5d98bb7b14b99afb539a75ee86a80 of   legacy-tempest-dsvm-neutron-full on 
> > started
> 
> However, zuul-executor was not running on ze04.  I believe there were
> issues with this host yesterday.  "/etc/init.d/zuul-executor start" and
> "service zuul-executor start" reported as OK, but didn't actually
> start the daemon.  Rather than debug, I just used
> _SYSTEMCTL_SKIP_REDIRECT=1 and that got it going.  We should look into
> that, I've noticed similar things with zuul-scheduler too.
> 
> At this point, the evidence suggested zuul was waiting for jobs that
> would never return.  Thus I saved the queues, restarted zuul-scheduler
> and re-queued.
> 
> Soon after frickler again noticed that releasenotes jobs were now
> failing with "could not import extension openstackdocstheme" [1].  We
> suspect [2].
> 
> However, the gate did not become healthy.  Upon further investigation,
> the executors are very frequently failing jobs with
> 
>  2017-12-08 06:41:10,412 ERROR zuul.AnsibleJob: [build: 
> 11062f1cca144052afb733813cdb16d8] Exception while executing job
>  Traceback (most recent call last):
>File "/usr/local/lib/python3.5/dist-packages/zuul/executor/server.py", 
> line 588, in execute
>  str(self.job.unique))
>File "/usr/local/lib/python3.5/dist-packages/zuul/executor/server.py", 
> line 702, in _execute
>File "/usr/local/lib/python3.5/dist-packages/zuul/executor/server.py", 
> line 1157, in prepareAnsibleFiles
>File "/usr/local/lib/python3.5/dist-packages/zuul/executor/server.py", 
> line 500, in make_inventory_dict
>  for name in node['name']:
>  TypeError: unhashable type: 'list'
> 
> This is leading to the very high "retry_limit" failures.
> 
> We suspect change [3] as this did some changes in the node area.  I
> did not want to revert this via a force-merge, I unfortunately don't
> have time to do something like apply manually on the host and babysit
> (I did not have time for a short email, so I sent a long one instead :)
> 
> At this point, I sent the alert to warn people the gate is unstable,
> which is about the latest state.
> 
> Good luck,
> 
> -i
> 
> [1] 
> http://logs.openstack.org/95/526595/1/check/build-openstack-releasenotes/f38ccb4/job-output.txt.gz
> [2] https://review.openstack.org/525688
> [3] https://review.openstack.org/521324
> 
Digging into some of the issues this morning, I believe that citycloud-sto2 has
been wedged for some time; I see ready / locked nodes sitting for 2+ days. We
also have a few ready / locked nodes in rax-iad, which I think are related to
the unhashable-list error from this morning.

As I understand it, the only way to release these nodes is to stop the
scheduler, is that correct? If so, I'd like to request we add some sort of CLI
--force option to delete, or some other command, if that makes sense.

I'll hold off on a restart until jeblair or shrews has a moment to look at the
logs.

Paul

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

Re: [OpenStack-Infra] Gate Issues

2017-12-08 Thread Paul Belanger
On Fri, Dec 08, 2017 at 08:56:58PM +1100, Ian Wienand wrote:
> On 12/08/2017 08:38 PM, Ian Wienand wrote:
> > However, the gate did not become healthy.  Upon further investigation,
> > the executors are very frequently failing jobs with
> > 
> >   2017-12-08 06:41:10,412 ERROR zuul.AnsibleJob: [build: 
> > 11062f1cca144052afb733813cdb16d8] Exception while executing job
> >   Traceback (most recent call last):
> > File "/usr/local/lib/python3.5/dist-packages/zuul/executor/server.py", 
> > line 588, in execute
> >   str(self.job.unique))
> > File "/usr/local/lib/python3.5/dist-packages/zuul/executor/server.py", 
> > line 702, in _execute
> > File "/usr/local/lib/python3.5/dist-packages/zuul/executor/server.py", 
> > line 1157, in prepareAnsibleFiles
> > File "/usr/local/lib/python3.5/dist-packages/zuul/executor/server.py", 
> > line 500, in make_inventory_dict
> >   for name in node['name']:
> >   TypeError: unhashable type: 'list'
> > 
> > This is leading to the very high "retry_limit" failures.
> > 
> > We suspect change [3] as this did some changes in the node area.
> > [3] https://review.openstack.org/521324
> 
> It was quickly pointed out by frickler that jobs to ze04 were working,
> which made it clear that actually the executors just needed to be
> restarted to pick up these changes too.  I've done that and things are
> looking better.
> 
Thanks, and yeah, we needed to restart all of zuul for the changes in 521324;
sorry for not over-communicating that.

> -i
> 
> ___
> OpenStack-Infra mailing list
> OpenStack-Infra@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

Re: [OpenStack-Infra] plan for Zuul and Nodepool to support Python3.x ?

2017-12-07 Thread Paul Belanger
On Fri, Dec 08, 2017 at 12:38:18AM +0800, Apua A.Aa wrote:
> Hi,
> 
> As title, is there a plan or road map for Zuul and Nodepool to
> support/migrate to Python3.x currently?
> 
Hello,

Currently the feature/zuulv3 branches for both nodepool and zuul only support
Python 3.5+. We expect to merge them back into master and tag 3.0 releases in
the coming weeks.

> 
> Apua
> 
> ___
> OpenStack-Infra mailing list
> OpenStack-Infra@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

Re: [OpenStack-Infra] Nodepool drivers

2017-12-07 Thread Paul Belanger
On Thu, Dec 07, 2017 at 09:34:50AM +, Tristan Cacqueray wrote:
> Hi,
> 
> Top posting here to raise another complication.
> James mentioned an API problem regarding the NodeRequestHandler
> interface. Indeed the run_handler method should actually be part of the
> generic code so that the driver's handler only implements the 'launch' method.
> 
> Unfortunately, this is another refactor where we need to move and
> abstract a good chunk of the openstack handler... I worked on a first
> implementation that adds new handler interfaces to address the openstack
> driver needs (such as setting az when a node is reused):
>  https://review.openstack.org/526325
> 
> Well I'm not sure what's the best repartition of roles between the
> handler, the node_launcher and the provider, so feedback would be
> appreciated.
> 
> 
> I also proposed a 'plugin' interface so that driver are fully contained
> in their namespace, which seems like another legitimate addition to this
> feature:
>  https://review.openstack.org/524620
> 
I like the idea of some sort of plugin interface, if only to allow out-of-tree
drivers to be maintained more easily. I found stevedore easy enough to use when
I had to write some OpenStack plugins in the past; is that something we might
look into reusing here?

> 
> Thanks,
> -Tristan
> 
> 
> On December 2, 2017 1:30 am, Clint Byrum wrote:
> > Excerpts from corvus's message of 2017-12-01 16:08:00 -0800:
> > > Tristan Cacqueray  writes:
> > > 
> > > > Hi,
> > > >
> > > > Now that the zuulv3 release is approaching, please find below a
> > > > follow-up on this spec.
> > > >
> > > > The current code could use one more patch[0] to untangle the common
> > > > config from the openstack provider specific bits. The patch often needs
> > > > to be manualy rebased. Since it looks like a good addition to what
> > > > has already been merged, I think we should consider it for the release.
> > > >
> > > > Then it seems like new drivers are listed as 'future work' on the
> > > > zuul roadmap board, though they are still up for review[1].
> > > > They are fairly self contained and they don't require further
> > > > zuul or nodepool modification, thus they could be easily part of a
> > > > future release indeed.
> > > >
> > > > However I think we should re-evaluate them for the release one more
> > > > time since they enable using zuul without an OpenStack cloud.
> > > > Anyway I remain available to do the legwork.
> > > >
> > > > Regards,
> > > > -Tristan
> > > >
> > > > [0]: https://review.openstack.org/488384
> > > > [1]: https://review.openstack.org/468624
> > > 
> > > I think getting the static driver in to the 3.0 release is reasonable --
> > > most of the work is done, and I think it will make simple or test
> > > deployments of Zuul much easier.  That can make for a better experience
> > > for users trying out Zuul.
> > > 
> > > I'd support moving that to the 3.0 roadmap, but reserving further
> > > drivers for later work.  Thanks!
> > 
> > +1. The static driver has come up a few times in my early experiments
> > and I keep bouncing off of it.
> > 
> > ___
> > OpenStack-Infra mailing list
> > OpenStack-Infra@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra



> ___
> OpenStack-Infra mailing list
> OpenStack-Infra@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra


___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

[OpenStack-Infra] Virtual Sprint Queens

2017-11-30 Thread Paul Belanger
Greetings,

I have created an etherpad[1] with some thoughts about doing a virtual sprint to
upgrade our control plane to Ubuntu Xenial. Please take a moment to read and
indicate when you could assist.

While creating / deleting / upgrading control plane servers does require
infra-root permissions, there is a fair bit of stuff a non-infra-root can do to
help. Specifically, we are likely going to need some updates to our puppet
manifests for some servers. This can be done by launching puppet locally using
openstack-infra/system-config on an Ubuntu Xenial VM and confirming whether it
works. If it doesn't, pushing patches into gerrit will be more than welcome!

I've also created 2 groups of servers: ephemeral and long-lived. Ephemeral
means we can likely just delete / create the server without any downtime or
impact to the OpenStack project, whereas our long-lived servers are going to
require downtime, DNS updates, etc.

Last time we did this, we managed to knock out a large number of server
upgrades in a week, and I am hoping we'll be able to reproduce that!

As always, reply here or in #openstack-infra with questions.

-Paul

[1] https://etherpad.openstack.org/p/infra-sprint-xenial-upgrades

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

Re: [OpenStack-Infra] Zuul roadmap

2017-11-21 Thread Paul Belanger
On Wed, Nov 01, 2017 at 02:47:20PM -0700, James E. Blair wrote:
> Hi,
> 
> At the PTG we brainstormed a road map for Zuul once we completed the
> infra cutover.  I think we're in a position now that we can get back to
> thinking about this, so I've (slightly) cleaned it up and organized it
> here.
> 
> I've grouped into a number of sections.  First:
> 
> Very Near Term
> --
> 
> These are things that we should be able to land within just a few weeks
> at most, once we're back from the OpenStack summit and can pay more
> attention to work other than the openstack-infra migration.  All of
> these are already in progress (some are basically finished) and all have
> a primary driver assigned:
> 
> * granular quota support in nodepool (tobias)
> * zuul-web dashboard (tristanC)
> * update private key api for zuul-web (jeblair)
> * github event ingestion via zuul-web (jlk)
> * abstract flag (do not run this job) (jeblair)
> * zuul_json fixes (dmsimard)
> 
> Short Term
> --
> 
> These are things we should be able to do within the weeks or months
> following.  Some have had work start on them already and have a driver
> assigned, others are still up for grabs.  These are things we really
> ought to get done before the v3.0 release because either they involve
> some of the defining features of v3, make it possible to actually deploy
> and run v3, or may involve significant changes for which we don't want
> to have to deal with backwards compatability.
> 
> * refactor config loading (jeblair)
> * protected flag (inherit only within this project) (jeblair)
> * refactor zuul_stream and add testing (mordred)
> * getting-started documentation (leifmadsen)
> * demonstrate openstack-infra reporting on github
I can start working on this one; are there any objections if we use
gtest-org/ansible first?

> * cross-source dependencies
> * add command socket to scheduler and merger for consistent start/stop
I can see about working on this too.

> * finish git driver
> * standardize javascript tooling
> 
> -- v3.0 release 
> 
> Yay!  After we release...
> 
> Medium Term
> ---
> 
> Once the initial v3 release is out the door, there are some things that
> we have been planning on for a while and should work on to improve the
> v3 story.  These should be straightforward to implement, but these don't
> need to hold up the release and can easily fit into v3.1.
> 
> * add line comment support to reporters
> * gerrit ci reporting (2.14)
> * add cleanup jobs (jobs that always run even if parents fail)
> * automatic job doc generation
> 
> Long Term / Design
> --
> 
> Some of these are items that we should either discuss a bit further
> before implementing, but most of them probably warrant an proposal in
> infra-specs so we can flesh out the design before we start work.
> 
> * gerrit ingestion via separate process?
> * per-job artifact location
> * need way for admin to trigger a single job (not just a buildset)
> * nodepool backends
> * nodepool label access (tenant/project label restrictions?)
> * nodepool tenant awareness?
> * nodepool rest api alignment?
> * selinux domains
> * fedmesg driver (trigger/reporter)
> * mqtt driver (trigger/reporter)
> * nodepool status ui?
> 
> How does this look?
> 
> -Jim
> 
> ___
> OpenStack-Infra mailing list
> OpenStack-Infra@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

Re: [OpenStack-Infra] npm Zuul jobs - need help

2017-11-13 Thread Paul Belanger
On Mon, Nov 13, 2017 at 08:00:01PM +0100, Andreas Jaeger wrote:
> On 2017-11-13 19:47, Paul Belanger wrote:
> > On Mon, Nov 13, 2017 at 07:39:08PM +0100, Andreas Jaeger wrote:
> >> Hi team,
> >>
> >> let me summarize the current status and my request for help here.
> >>
> >> Note: with npm I mean the javascript node tests.
> >>
> >> I fixed last week the npm test, lint, and docs jobs and converted them
> >> to native Zuul v3.
> >>
> >> But then we noticed the problem that horizon expects chromium and xvfb
> >> installed and xvfb started for their tests.
> >>
> > Do you have an example log of the failure?  Can't we use the test-setup role
> > and add this into tools/test-setup.sh for now?
> 
> The linked bug contains a log
> https://bugs.launchpad.net/horizon/+bug/1731421
> 
> I didn't look into using it. Keep in mind that a few other repos using that
> job would need it as well.
> 
> Btw. I was not happy that my change only works on Ubuntu and installs
> xvfb/chromium using apt.
> 
I'll propose a patch, but I think we could either use the test-setup role, move
these OS packages into bindep.txt, or both. As long as that happens in a
pre-run, I don't think we need to add them into zuul-jobs.

> Andreas
> 
> > 
> >> This led to a reversal (change https://review.openstack.org/#/c/518881/
> >> ) so that we continue to use the legacy npm-test job. Now the question
> >> is how to fix this properly.
> >>
> >> Akihiro Motoki and myself proposed
> >> https://review.openstack.org/#/c/518879/ and tested that it works in
> >> horizon (https://review.openstack.org/518880). Is this the right
> >> approach? Or is that so OpenStack specific that we need to move it to
> >> openstack-zuul-jobs? I'm also not happy about some changes in there, so
> >> would really appreciate if somebody could take this over and do it the
> >> right way.
> >>
> >> A second problem is that the npm-docs automatic conversion was bogus. It
> >> converted everything to use the sphinx build jobs. I fixed this with
> >> https://review.openstack.org/#/c/518883 for the npm-docs template. Now
> >> the missing piece is the publishing part of it - and then we need to
> >> design templates for the publishing and review the usage of
> >> publish-openstack-sphinx-docs for npm jobs. Have a look at
> >> eslint-config-openstack in project-config/zuul.d/projects.yaml, it uses
> >> the publish-openstack-sphinx-docs template which adds sphinx building
> >> and publishing - we need instead a docs publishing one. I didn't check
> >> how many repos have this broken set up.
> >>
> >> Could anybody tackle these two problems and take over, please? I'm happy
> >> to review and learn - but don't have the energy this week to fix it myself,
> >>
> >> Andreas
> 
> -- 
>  Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
>   SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
>GF: Felix Imendörffer, Jane Smithard, Graham Norton,
>HRB 21284 (AG Nürnberg)
> GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126
> 

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

Re: [OpenStack-Infra] npm Zuul jobs - need help

2017-11-13 Thread Paul Belanger
On Mon, Nov 13, 2017 at 07:39:08PM +0100, Andreas Jaeger wrote:
> Hi team,
> 
> let me summarize the current status and my request for help here.
> 
> Note: with npm I mean the javascript node tests.
> 
> I fixed last week the npm test, lint, and docs jobs and converted them
> to native Zuul v3.
> 
> But then we noticed the problem that horizon expects chromium and xvfb
> installed and xvfb started for their tests.
> 
Do you have an example log of the failure?  Can't we use the test-setup role
and add this into tools/test-setup.sh for now?

> This led to a reversal (change https://review.openstack.org/#/c/518881/
> ) so that we continue to use the legacy npm-test job. Now the question
> is how to fix this properly.
> 
> Akihiro Motoki and myself proposed
> https://review.openstack.org/#/c/518879/ and tested that it works in
> horizon (https://review.openstack.org/518880). Is this the right
> approach? Or is that so OpenStack specific that we need to move it to
> openstack-zuul-jobs? I'm also not happy about some changes in there, so
> would really appreciate if somebody could take this over and do it the
> right way.
> 
> A second problem is that the npm-docs automatic conversion was bogus. It
> converted everything to use the sphinx build jobs. I fixed this with
> https://review.openstack.org/#/c/518883 for the npm-docs template. Now
> the missing piece is the publishing part of it - and then we need to
> design templates for the publishing and review the usage of
> publish-openstack-sphinx-docs for npm jobs. Have a look at
> eslint-config-openstack in project-config/zuul.d/projects.yaml, it uses
> the publish-openstack-sphinx-docs template which adds sphinx building
> and publishing - we need instead a docs publishing one. I didn't check
> how many repos have this broken set up.
> 
> Could anybody tackle these two problems and take over, please? I'm happy
> to review and learn - but don't have the energy this week to fix it myself,
> 
> Andreas
> -- 
>  Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
>   SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
>GF: Felix Imendörffer, Jane Smithard, Graham Norton,
>HRB 21284 (AG Nürnberg)
> GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126
> 
> 
> ___
> OpenStack-Infra mailing list
> OpenStack-Infra@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

Re: [OpenStack-Infra] Nominating new project-config and zuul job cores

2017-10-13 Thread Paul Belanger
On Fri, Oct 13, 2017 at 09:25:52AM -0700, Clark Boylan wrote:
> Hello everyone,
> 
> I'd like to nominate a few people to be core on our job related config
> repos. Dmsimard, mnaser, and jlk have been doing some great reviews
> particularly around the Zuul v3 transition. In recognition of this work
> I propose that we give them even more responsibility and make them all
> cores on project-config, openstack-zuul-jobs, and zuul-jobs.
> 
> Please chime in with your feedback.
> 
> Thank you (especially for all the code reviews),
> Clark
> 
+1

Thank you for helping out!

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

Re: [OpenStack-Infra] First member of networking-lagopus gerrit group

2017-09-21 Thread Paul Belanger
On Thu, Sep 21, 2017 at 10:17:20AM +0900, Hirofumi Ichihara wrote:
> Hi Infra team,
> 
> I proposed networking-lagopus project[1] and then it looks like the project
> was created[2, 3].
> Could you add me into the groups?
> 
> [1]: https://review.openstack.org/#/c/501730/
> [2]: https://review.openstack.org/#/admin/groups/1837,members
> [3]: https://review.openstack.org/#/admin/groups/1838,members
> 
> Thanks,
> Hirofumi
> 
Done!

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

Re: [OpenStack-Infra] [incident] OVH-BHS1 mirror disappeared

2017-09-21 Thread Paul Belanger
On Thu, Sep 21, 2017 at 12:58:58PM +1000, Ian Wienand wrote:
> 
> At around Sep 21 02:30UTC mirror01.bhs1.ovh.openstack.org became
> uncontactable and jobs in the region started to fail.
> 
> The server was in an ACTIVE state but uncontactable.  I attempted to
> get a console but either a log or url request returned 500 (request
> id's below if it helps).
> 
>  ... console url show ...
> The server has either erred or is incapable of performing the requested 
> operation. (HTTP 500) (Request-ID: req-5da4cba2-efe8-4dfb-a8a7-faf490075c89)
>  ...  console log show ...
> The server has either erred or is incapable of performing the requested 
> operation. (HTTP 500) (Request-ID: req-80beb593-b565-42eb-8a97-b2a208e3d865)
> 
> I could not figure out how to log into the web console with our
> credentials.
> 
> I attempted to hard-reboot it, and it currently appears stuck in
> HARD_REBOOT.  Thus I have placed nodepool.o.o in the emergency file
> and set max-servers for the ovh-bhs1 region to 0
> 
> I have left it at this, as hopefully it will be beneficial for both
> OVH and us to diagnose the issue since the host was definitely not
> expected to disappear.  After this we can restore or rebuild it as
> required.
> 
> Thanks,
> 
http://mirror01.bhs1.ovh.openstack.org/ appears to be back online. I'll confirm
whether we want to remove nodepool from the emergency file, and try to reach
out to OVH.

> -i
> 
> ___
> OpenStack-Infra mailing list
> OpenStack-Infra@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

Re: [OpenStack-Infra] Infra Team PTG Dinner

2017-09-04 Thread Paul Belanger
On Sun, Sep 03, 2017 at 04:26:19PM -0700, Clark Boylan wrote:
> I'm not sure we need a doodle. Based on the etherpad Tuesday and
> Wednesday are the only two evenings everyone is in town and of those two
> Tuesday is the only day that hasn't already been listed as a conflict.
> 
> Can we go with Tuesday at the Lowry Beer Garden? I think there is a
> happy hour thing happening at the PTG that evening so we could meetup
> after that and shuttle over to the beer garden (looks like shouldn't be
> more than a 10-15 minute car ride).
> 
> Does this work?
> Clark
> 
Works for me.

> On Sun, Sep 3, 2017, at 04:03 PM, David Moreau Simard wrote:
> > Do we want to set up a scheduling doodle [1] or something to get the
> > best possible time ?
> > 
> > As the PTG draws near, different events are being set up.
> > There's the RDO release party wednesday [2] (everyone is invited!) and
> > I'm trying to juggle tuesday right now.
> > 
> > [1]: http://doodle.com/
> > [2]: https://www.redhat.com/archives/rdo-list/2017-August/msg00057.html
> > 
> > 
> > David Moreau Simard
> > Senior Software Engineer | OpenStack RDO
> > 
> > dmsimard = [irc, github, twitter]
> > 
> > 
> > On Wed, Aug 23, 2017 at 5:21 PM, Clark Boylan 
> > wrote:
> > > Hello,
> > >
> > > I brought this up briefly in yesterday's meeting, but are we interested
> > > in having a team dinner while at the PTG? I expect we are so I've tried
> > > to start putting information together at
> > > https://etherpad.openstack.org/p/infra-ptg-team-dinner.
> > >
> > > The biggest issue is there does not appear to be a lot of options near
> > > the PTG hotel. That said, there appears to be a good beer garden option
> > > about 4 miles away. Beer gardens were a hit in Darmstadt because they
> > > can accomodate large groups, we don't have to pre negotiate reservations
> > > or who is paying, and of course because beer.
> > >
> > > If interested please add your name to the list on the etherpad and the
> > > evenings you are available for the dinner. Also, if anyone else has
> > > better ideas (seriously I'm not great at this) or wants to organize go
> > > ahead and edit the etherpad and/or let me know.
> > >
> > > Thanks,
> > > Clark
> > >
> > > ___
> > > OpenStack-Infra mailing list
> > > OpenStack-Infra@lists.openstack.org
> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra
> 
> ___
> OpenStack-Infra mailing list
> OpenStack-Infra@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

Re: [OpenStack-Infra] Infra Team PTG Dinner

2017-08-23 Thread Paul Belanger
On Wed, Aug 23, 2017 at 02:21:07PM -0700, Clark Boylan wrote:
> Hello,
> 
> I brought this up briefly in yesterday's meeting, but are we interested
> in having a team dinner while at the PTG? I expect we are so I've tried
> to start putting information together at
> https://etherpad.openstack.org/p/infra-ptg-team-dinner.
> 
> The biggest issue is there does not appear to be a lot of options near
> the PTG hotel. That said, there appears to be a good beer garden option
> about 4 miles away. Beer gardens were a hit in Darmstadt because they
> can accomodate large groups, we don't have to pre negotiate reservations
> or who is paying, and of course because beer.
> 
> If interested please add your name to the list on the etherpad and the
> evenings you are available for the dinner. Also, if anyone else has
> better ideas (seriously I'm not great at this) or wants to organize go
> ahead and edit the etherpad and/or let me know.
> 
As somebody who did Boston, I'm okay with whatever you choose. :)

> Thanks,
> Clark
> 
> ___
> OpenStack-Infra mailing list
> OpenStack-Infra@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

Re: [OpenStack-Infra] citycloud lon1 mirror postmortem

2017-08-10 Thread Paul Belanger
On Thu, Aug 10, 2017 at 10:34:56PM +1000, Ian Wienand wrote:
> Hi,
> 
> In response to sdague reporting that citycloud jobs were timing out, I
> investigated the mirror, suspecting it was not providing data fast enough.
> 
> There were some 170 htcacheclean jobs running, and the host had a load
> over 100.  I killed all these, but performance was still unacceptable.
> 
> I suspected networking, but since the host was in such a bad state I
> decided to reboot it.  Unfortunately it would get an address from DHCP
> but seemed to have DNS issues ... eventually it would ping but nothing
> else was working.
> 
> nodepool.o.o was placed in the emergency file and I removed lon1 to
> avoid jobs going there.
> 
> I used the citycloud live chat, and Kim helpfully investigated and
> ended up migrating mirror.lon1.citycloud.openstack.org to a new
> compute node.  This appeared to fix things, for us at least.
> 
> nodepool.o.o is removed from the emergency file and original config
> restored.
> 
> With hindsight, clearly the excessive htcacheclean processes were due
> to negative feedback of slow processes due to the network/dns issues
> all starting to bunch up over time.  However, I still think we could
> minimise further issues running it under a lock [1].  Other than that,
> not sure there is much else we can do, I think this was largely an
> upstream issue.
> 
> Cheers,
> 
> -i
> 
> [1] https://review.openstack.org/#/c/492481/
> 
Thanks. I also noticed a job failing to download a package from
mirror.iad.rax.openstack.org. When I SSH'd to the server I too saw high load
(6.0+) and multiple htcacheclean processes running.

I did an audit of the other mirrors and they had the same problem, so I killed
all the processes there. I can confirm the lock patch merged, and will keep an
eye on it.

I did notice that mirror.lon1.citycloud.openstack.org was still slow to react
to shell commands. I still think we have an IO bottleneck somewhere; possibly
the compute host is throttling something. We should keep an eye on it.

-PB

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

Re: [OpenStack-Infra] MeetBot down on #openstack-meeting-4

2017-08-09 Thread Paul Belanger
On Wed, Aug 09, 2017 at 01:24:52PM +, Afek, Ifat (Nokia - IL/Kfar Sava) 
wrote:
> Hi,
> 
> I tried to open a Vitrage meeting on #openstack-meeting-4, but MeetBot was 
> down.
> Attached is the meeting log. Please place it under eavesdrop, so it won’t get 
> lost.
> 
> Thanks,
> Ifat.
> 
Thanks, fixed. Our bot was on the wrong side of a netsplit.

> ___
> OpenStack-Infra mailing list
> OpenStack-Infra@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra


___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

Re: [OpenStack-Infra] Zuul v3 Job migration - input needed

2017-08-08 Thread Paul Belanger
On Tue, Aug 08, 2017 at 09:57:24AM -0500, Monty Taylor wrote:
> Heya,
> 
> I started working on the Zuul Migration script - largely because we're
> starting to have more mappings of old job to new job than I can keep in my
> head and I wanted to be able to write them down as we make them.
> 
> https://review.openstack.org/#/q/topic:zuul-v3-migration
> 
> It doesn't do much yet - but I did add a zuul-v3 job that runs the migration
> on project-config and collects the results so we should be able to examine
> that as we work on it.
> 
> I've run in to a conceptual issue already that I'd like feedback on.
> 
> In dealing with project-templates, we have a bit of an issue. Take this
> snippet from layout.yaml:
> 
> project-templates:
>   - name: loci-jobs
> check:
>   - 'gate-{name}-ubuntu-xenial'
> gate:
>   - 'gate-{name}-ubuntu-xenial'
> 
> We have two choices of how we can do transforms of things we don't
> explicitly map. We can expand them all and make completely equivilent jobs,
> so that we'd generate the following jobs:
> 
> - job:
> name: gate-loci-cinder
> 
> - job:
> name: gate-loci-glance
> 
> If we do that, then we can't really migrate the templates and would have to
> put expanded templates into the resulting project pipeline config, so we'd
> have:
> 
> - project:
> name: openstack/loci-cinder
> check:
>   - gate-loci-cinder
> gate:
>   - gate-loci-cinder
> 
> this is the safest - we KNOW that we can do this migration today and the
> jobs will all work as today - but the resulting config will be uglier. This
> is also a data loss as we'll lose ALL of our current templates.
> 
> We could also depend on the mapping construct and define a mapping for these
> loci jobs:
> 
> - old: gate-{name}-ubuntu-xenial
>   name: test-loci
> 
> in which case we could make:
> 
> - job:
> name: test-loci
> 
> - project-template:
> name: loci-jobs
> check:
>   - test-loci
> gate:
>   - test-loci
> 
> - project:
> name: openstack/loci-cinder
> templates:
>   - loci-jobs
> 
> But doing this will require a much more careful audit of our output - we
> cannot be sure that cases we haven't looked at will work and we essentially
> need to hand-verify every project-template to make sure the jobs it contains
> don't have edge-conditions we need to map.
> 
> As a third option - we could auto-expand {name} in project-templates to the
> name of the project template and generate jobs that way:
> 
> - job:
> name: gate-loci-jobs
> 
> - project-template:
> name: loci-jobs
> check:
>   - gate-loci-jobs
> gate:
>   - gate-loci-jobs
> 
> - project:
> name: openstack/loci-cinder
> templates:
>   - loci-jobs
> 
> This is automatic and doesn't have data loss - but might be a little
> confusing. It also means for some of our templates we'll have duplicate jobs
> we don't need. {name}-tarball, for instance, shows up in many template. Now
> - we'll have a mapping for {name}-tarball - but for jobs that we don't have
> explicit mappings for and that do show up in multiple templates it might be
> weird.

I am fine with option 3. I don't expect it to be 100% perfect, but it should
give us a good base to start refactoring things. I see this as a one-time cost:
once imported for openstack-infra, we can start work on reducing duplicate
data.



> 
> Thoughts?
> 
> ___
> OpenStack-Infra mailing list
> OpenStack-Infra@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

Re: [OpenStack-Infra] Delete a bad branch from openstack/ara

2017-08-04 Thread Paul Belanger
On Thu, Aug 03, 2017 at 11:32:09PM -0400, David Moreau Simard wrote:
> Hi,
> 
> I've been meaning to create a feature branch and ended up creating
> "1.0" before I realized that "feature/1.0" was much more appropriate.
> 
> Could you please delete "1.0" ?
> 
> Thanks (and sorry)!
> 
Done.

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

[OpenStack-Infra] Zuulv3 discussion on pbx.o.o

2017-08-02 Thread Paul Belanger
Greetings,

We had a voice discussion on pbx.o.o today; I wanted to give a summary for
people following along at home.

jeblair, mordred, fungi and myself held it on sip:6...@pbx.openstack.org.

The initial question, posed by myself, was how best to organize our
tarballs.o.o publisher for tox jobs. This came up as a result of my working on
the new zuulv3 jobs to handle it.

The main issue was that we need to add_host (ansible) for tarballs.o.o within a
playbook / role, and where this would best be done. Adding it to our base job
didn't make much sense, since not all jobs need access to tarballs.o.o.

After a quick discussion, it was decided we'd create a new job (trusted
playbook) in project-config called 'tox-publisher' (name open to bikeshedding),
which would parent to tox-tarball.
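
As a rough sketch of what that trusted playbook would do (the variable and
path names here are illustrative only), it is basically an add_host followed
by a copy onto the static server:

---
- hosts: localhost
  tasks:
    - name: Add the tarball server to the inventory
      add_host:
        name: tarballs.openstack.org
        groups: tarball_servers

- hosts: tarball_servers
  tasks:
    - name: Copy the built tarball into place
      copy:
        src: "{{ tarball_source_dir }}/"  # illustrative variable
        dest: "/srv/static/tarballs/{{ project_name }}/"  # illustrative path
---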

This then evolved into whether we actually need to upload a tarball to
tarballs.o.o before publishing to pypi; eventually we agreed it is still useful
because of master branch tarballs.

We then realized we needed to GPG sign our tarball files, but quickly found
that our 4096-bit length limit[1] would be an issue for our GPG sub keys.
jeblair proposed we revisit and implement our solution for the 4096
limitation[2]. However, mordred / fungi believe breaking our secret up into
chunks that fit the 4096-bit limit and creating a secret object / list would
also allow us to work around it.

It was proposed that zuulv3 would know how to read this list of secrets and
concatenate them together at job run time, creating the whole secret. We agreed
this might be worth doing.
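
To sketch what that could look like (exact syntax still to be decided), the
encrypted value in a secret would simply become a list, each element being a
separately encrypted piece of the plaintext small enough to fit under the
4096-bit key's OAEP limit:

---
- secret:
    name: gpg-signing-key
    data:
      private_key: !encrypted/pkcs1-oaep
        - <ciphertext of plaintext chunk 1>
        - <ciphertext of plaintext chunk 2>
        - <ciphertext of plaintext chunk 3>
---

Zuul would decrypt each chunk and join the results back together before
handing the value to the job.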

This is a very high-level summary of the discussion we had.

[1] 
https://docs.openstack.org/infra/zuul/feature/zuulv3/user/encryption.html#encryption
[2] http://lists.openstack.org/pipermail/openstack-dev/2017-March/114398.html

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

Re: [OpenStack-Infra] Zuul v3: Giant pile of thoughts around zuul-web things

2017-07-26 Thread Paul Belanger
On Wed, Jul 26, 2017 at 04:28:55PM -0400, David Moreau Simard wrote:
> Hi,
> 
> A bit out of my comfort zone for the content of your giant pile of
> thoughts, but I'll jump on this bit:
> 
> > We have a desire to add something that knows how to
> > render the job-output.json in some nice ways.
> 
> The next things on my plate for ARA are:
> - Introduce the concept of API to abstract model access
> - Input drivers based on the API (callback, zuul json, mqtt, etc.)
> - Output drivers based on the API (html, junit, subunit, influx, graphite, 
> etc.)
> 
> This is no small task but I'm hoping to have made significant progress
> by the time the PTG rolls around.
> 
> Paul showed interest in having those job-output.json files imported into ARA.
> I could see this taking a similar approach than how we are currently
> importing subunit files into openstack-health.
> 
Right, this is how stackviz works today. You export subunit data into json (I
think) and a static HTML site for stackviz will load it. The thing I like about
that is it would require no additional callbacks to be installed into zuul.  

I think the ansible callback for ARA makes sense for a long-lived ARA service,
but for viewing single job runs, maybe the stackviz approach would work, using
job-output.txt.

> I'll let you know if I have anything to show !
> 
> David Moreau Simard
> Senior Software Engineer | Openstack RDO
> 
> dmsimard = [irc, github, twitter]
> 
> ___
> OpenStack-Infra mailing list
> OpenStack-Infra@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

Re: [OpenStack-Infra] About aarch64 third party CI

2017-06-09 Thread Paul Belanger
On Fri, Jun 09, 2017 at 07:58:44PM +, Jeremy Stanley wrote:
> On 2017-06-07 14:26:10 +0800 (+0800), Xinliang Liu wrote:
> [...]
> > we already have our own pre-built debian cloud image, could I just
> > use it and not use the one built by diskimage-builder?
> [...]
> 
> The short answer is that nodepool doesn't currently have support for
> directly using an image provided independent of its own image build
> process. Clark was suggesting[*] in IRC today that it might be
> possible to inject records into Zookeeper (acting as a "fake"
> nodepool-builder daemon basically) to accomplish this, but nobody
> has yet implemented such a solution to our knowledge.
> 
> Longer term, I think we do want a feature in nodepool to be able to
> specify the ID of a prebuilt image for a label/provider (at least we
> discussed that we wouldn't reject the idea if someone proposed a
> suitable implementation). Just be aware that nodepool's use of
> diskimage-builder to regularly rebuild images is intentional and
> useful since it ensures images are updated with the latest packages,
> kernels, warm caches and whatever else you specify in your elements
> so reducing job runtimes as they spend less effort updating these
> things on every run.
> 
> [*]  http://eavesdrop.openstack.org/irclogs/%23openstack-infra/%23openstack-infra.2017-06-09.log.html#t2017-06-09T15:32:27-2
>  >
> -- 
> Jeremy Stanley

Actually, I think 458073[1] aims to fix this use case.  I haven't tried it
myself, but it adds support for using images which are not built and managed by
nodepool.

This is currently only on the feature/zuulv3 branch.
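
To give a rough idea of the shape, a provider entry for an externally managed
image might look something like the snippet below. The key names are my guess
based on the review topic, so check 458073 itself for the real syntax:

  # Illustrative only -- names, flavors and key placement may differ.
  providers:
    - name: arm64-cloud
      cloud-images:
        - name: debian-arm64
          image-name: my-prebuilt-debian-arm64   # image already in glance
      pools:
        - name: main
          max-servers: 4
          labels:
            - name: debian-arm64
              cloud-image: debian-arm64
              min-ram: 8192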

[1] https://review.openstack.org/#/c/458073/

> ___
> OpenStack-Infra mailing list
> OpenStack-Infra@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra


___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

Re: [OpenStack-Infra] On the subject of HTTP interfaces and Zuul

2017-06-09 Thread Paul Belanger
On Fri, Jun 09, 2017 at 12:35:59PM -0700, Clark Boylan wrote:
> On Fri, Jun 9, 2017, at 09:22 AM, Monty Taylor wrote:
> > Hey all!
> > 
> > Tristan has recently pushed up some patches related to providing a Web 
> > Dashboard for Zuul. We have a web app for nodepool. We already have the 
> > Github webhook receiver which is inbound http. There have been folks who 
> > have expressed interest in adding active-REST abilities for performing 
> > actions. AND we have the new websocket-based log streaming.
> > 
> > We're currently using Paste for HTTP serving (which is effectively 
> > dead), autobahn for websockets and WebOB for request/response processing.
> > 
> > This means that before we get too far down the road, it's probably time 
> > to pick how we're going to do those things in general. There are 2 
> > questions on the table:
> > 
> > * HTTP serving
> > * REST framework
> > 
> > They may or may not be related, and one of the options on the table 
> > implies an answer for both. I'm going to start with the answer I think 
> > we should pick:
> > 
> > *** tl;dr ***
> > 
> > We should use aiohttp with no extra REST framework.
> > 
> > Meaning:
> > 
> > - aiohttp serving REST and websocket streaming in a scale-out tier
> > - talking RPC to the scheduler over gear or zk
> > - possible in-process aiohttp endpoints for k8s style health endpoints
> > 
> > Since we're talking about a web scale-out tier that we should just have 
> > a single web tier for zuul and nodepool. This continues the thinking 
> > that nodepool is a component of Zuul.
> 
> I'm not sure that this is a great idea. We've already seen that people
> have wanted to use nodepool without a Zuul and even without performing
> CI. IIRC paul wanted to use it to keep a set of asterisks floating
> around for example. We've also seen that people want to use
> subcomponents of nodepool to build and manage a set of images for clouds
> without making instances.
> 
Ya, the asterisk use case aside, I think image build as a service is a prime example
of something nodepool could be great at on its own, especially now that
nodepool-builder is scaling out very well with zookeeper.
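
As a rough sketch, a builder-only deployment needs little more than the
diskimages section of nodepool.yaml; the paths and element names below are just
examples:

  # Minimal builder-focused config sketch; providers are only needed later if
  # you also want nodepool to upload and launch from these images.
  elements-dir: /etc/nodepool/elements
  images-dir: /opt/nodepool/images

  diskimages:
    - name: ubuntu-xenial
      elements:
        - ubuntu-minimal
        - vm
        - simple-init
      env-vars:
        DIB_RELEASE: xenial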

> In the past we have been careful to keep logical tools separate which
> has made it easy for us to add new tools and remove old ones.
> Operationally this may be perceived as making things more difficult to a
> newcomer, but it makes life much much better 3-6 months down the road.
> 
> > 
> > In order to write zuul jobs, end-users must know what node labels are 
> > available. A zuul client that says "please get me a list of available 
> > node labels" could make sense to a user. As we get more non-OpenStack 
> > users, those people may not have any concept that there is a separate 
> > thing called "nodepool".
> > 
> > *** The MUCH more verbose version ***
> > 
> > I'm now going to outline all of the thoughts and options I've had or 
> > have heard other people say. It's an extra complete list - there are 
> > ideas in here you might find silly/bad. But since we're picking a 
> > direction, I think it's important we consider the options in front of us.
> > 
> > This will cover 3 http serving options:
> > 
> > - WSGI
> > - aiohttp
> > - gRPC
> > 
> > and 3 REST framework options:
> > 
> > - pecan
> > - flask-restplus
> > - apistar
> > 
> > ** HTTP Serving **
> > 
> > WSGI
> > 
> > The WSGI approach is one we're all familiar with and it works with 
> > pretty much every existing Python REST framework. For us I believe if we 
> > go this route we'd want to serve it with something like uwsgi and 
> > Apache. That adds the need for an Apache layer and/or management uwsgi 
> > process. However, it means we can make use of normal tools we all likely 
> > know at least to some degree.
> 
> FWIW I don't think Apache would be required. uWSGI is a fairly capable
> http server aiui. You can also pip install uwsgi so the simple case
> remains fairly simple I think.
> 
> > 
> > A downside is that we'll need to continue to handle our Websockets work 
> > independently (which is what we're doing now)
> > 
> > Because it's in a separate process, the API tier will need to make 
> > requests of the scheduler over a bus, which could be either gearman or
> > zk.
> > 
> 
> Note that OpenStack has decided that this is a better solution than
> using web servers in the python process. That doesn't necessarily mean
> it is the best choice for Zuul, but it seems like there is a lot we can
> learn from the choice to switch to WSGI in OpenStack.
> 
> > aiohttp
> > 
> > Zuul v3 is Python3, which means we can use aiohttp. aiohttp isn't 
> > particularly compatible with the REST frameworks, but it has built-in 
> > route support and helpers for receiving and returning JSON. We don't 
> > need ORM mapping support, so the only thing we'd really be MISSING from 
> > REST frameworks is auto-generated documentation.
> > 
> > aiohttp also supports websockets directly, so we could port the autobahn 
> > work to use aioh

Re: [OpenStack-Infra] Zuul v3: proposed new Depends-On syntax

2017-06-01 Thread Paul Belanger
On Wed, May 24, 2017 at 04:04:20PM -0700, James E. Blair wrote:
> Hi,
> 
> As part of Zuul v3, we're adding support for GitHub (and later possibly
> other systems).  We want these systems to have access to the full power
> of cross-project-dependencies in the same way as Gerrit.  However, the
> current syntax for the Depends-On footer is currently the
> Gerrit-specific change-id.
> 
> We chose this in an attempt to be future-compatible with some proposed
> changes to Gerrit itself to support cross-project dependencies.  Since
> then, Gerrit has gone in a different direction on this subject, so I no
> longer think we should weigh that very heavily.
> 
> While Gerrit change ids can be used to identify one or more changes
> within a Gerrit installation, there is no comparable identifier on
> GitHub, as pull request numbers are unique only within a project.
> 
> The natural way to identify a GitHub pull request is with its URL.
> 
> This can be used to identify Gerrit changes as well, and will likely be
> well supported by other systems.  Therefore, I propose we support URLs
> as the content of the Depends-On footers for all systems.  E.g.:
> 
>   Depends-On: https://review.openstack.org/12345
>   Depends-On: https://github.com/ansible/ansible/pull/12345
> 
Hopefully this is not too off-topic, but would it also be possible to support the reverse of
this?  I know we've unofficially used the Needed-By footer for some governance
patches. It has been helpful when looking at git logs to see the dependency in
the other direction from time to time.

Not a big deal if it is a no, just something that popped into my head when
reading this topic.

> Similarly to the Gerrit change IDs, these identifiers are easily
> navigable within Gerrit (and Gertty), so that reviewers can traverse the
> dependency chain easily.
> 
> One substantial aspect of this change is that it is more specific about
> projects and branches.  A single Gerrit change ID can refer to more than
> one branch, and even more than one project.  Zuul interprets this as
> "this change depends on *all* of the changes that match".  Often times
> that is convenient, but sometimes it is not.  Frequently users ask "how
> can I make this depend only on a change to master, not the backport of
> the change to stable?" and the answer is, "you can't".
> 
> URLs have the advantage of allowing users to be specific as to which
> instances of a given change are actually required.  If, indeed, a change
> depends on more than one, of course a user can still add multiple
> Depends-On headers, one for each.
> 
> It is also easy for Zuul connections to determine whether a given URL is
> referring to a change on that system without actually needing to query
> it.  A Zuul connected to several code review systems can easy determine
> which to ask for the change by examining the hostname.
> 
> URLs do have two disadvantages compared to Gerrit change IDs: they can
> not be generated ahead of time, and they are not as easily found in
> offline git history.
> 
> With Gerrit change IDs, we can write several local changes, and before
> pushing them to Gerrit, add Depends-On headers since the change id is
> generated locally.  URLs are not known until the changes are pushed to
> Gerrit (or GitHub pull requests opened).  So in some cases, editing of
> an already existing commit message may be required.  However, the most
> common case of a simple dependency chain can still be easily created by
> pushing one change up at a time.
> 
> Change IDs, by virtue of being in the commit message of the dependent as
> well as depending change, become part of the permanent history of the
> project, no longer tied to the code review system, once they merge.
> This is an important thing to consider for long-running projects.  URLs
> are less suitable for this, since they acquire their context from
> contemporaneous servers.  However, Gerrit does record the review URL in
> git notes, so while it's not as convenient, with some additional tooling
> it should be possible to follow dependency paths with only the git
> history.
> 
> Of course, this is not a change we can make instantaneously -- the
> change IDs have a lot of inertia and developer muscle memory.  And we
> don't want changes that have been in progress for a while to suddenly be
> broken with the switch to v3.  So we will need to support both syntaxes
> for some time.
> 
> We could, indeed, support both syntaxes indefinitely, but I believe it
> would be better to plan on deprecating the Gerrit change ID syntax with
> an eye to eventually removing it.  I think that ultimately, the URL
> syntax for Depends-On is more intuitive to a new user, especially one
> that may end up being exposed to a Zuul which connects to multiple
> systems.  Having a Gerrit change depend on a GitHub pull request (and
> vice versa) will be one of the most powerful features of Zuul v3, and
> the syntax for that should be approachable.
> 
> In short, I think the value of consistency across mu

[OpenStack-Infra] puppet-pip breakage for systems

2017-06-01 Thread Paul Belanger
Puppet users,

Last night, I hastily approved 469559[1], which ended up doing some damage to our
production servers.  The symlink logic was not correct, and what ended up
happening was that the python3 pip was downloaded and installed, followed by our
symlink command.  E.g.:

  1 - We ran get-pip.py under python3
  2 - This created pip, pip3, pip3.x for python3
  3 - pip2 was symlinked to pip (making it python3 also)

This meant any existing pip installs that were python2-based were incorrectly
made python3:

  pip (python3), pip2 (symlink to python3), pip3 (new python3)

We posted 469851[2] this morning to undo the symlinking and correctly reinstall
pip as python2.  However, during that window, any puppet task that used pip could
have attempted to install packages using python3.

It is recommended that you audit your servers, specifically 3rd party CI, to see if
there were any issues during this time period.  We created an etherpad[3] for
openstack-infra to track the failures; it has some example commands on how you
can help audit your own servers.
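
If it helps, something along these lines (an untested sketch; the host group is
an assumption) will report which interpreter each pip wrapper is bound to across
a set of servers:

  # audit-pip.yaml -- "pip X.Y from ... (python 3.x)" in the output makes a
  # python3-bound /usr/local/bin/pip easy to spot.
  - hosts: all
    gather_facts: no
    tasks:
      - name: Check which python each pip wrapper points at
        command: "{{ item }} --version"
        register: pip_versions
        ignore_errors: yes
        with_items:
          - pip
          - pip2
          - pip3

      - debug:
          var: pip_versions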

Apologies for the trouble today; I should have been more careful in reviewing the
initial patch.

[1] https://review.openstack.org/#/c/469559/
[2] https://review.openstack.org/#/c/469851/
[3] https://etherpad.openstack.org/p/infra-pip-symlink-failure

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

Re: [OpenStack-Infra] Nodepool drivers

2017-05-29 Thread Paul Belanger
On Mon, May 29, 2017 at 02:39:16AM +, Tristan Cacqueray wrote:
> Hi,
> 
> With the nodepool-drivers[0] spec approved, I started to hack a quick
> implementation[1]. Well I am not very familiar with the nodepool/zookeeper
> architecture, thus this implementation may very well be missing important
> bits... The primary goal is to be able to run ZuulV3 with static nodes,
> comments and feedbacks are most welcome.
> 
> Moreover, assuming this isn't too off-track, I'd like to propose an
> OpenContainer and a libvirt driver to diversify Test environment.
> 
I know in the past we talked about using kubernetes for this, but that might be
a large dependency for testing environments.  A quick peek at the code makes me
wonder if we maybe shouldn't just turn the interface into an Ansible driver.
This gets the container logic outside of nodepool and, by using ansible, makes it
generic enough to use any container / serverless / something.

It could be possible; this is where something like linchpin comes in handy too.
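
Just to sketch the idea (nothing concrete, and the names are invented): if the
driver simply handed node requests to an ansible playbook, a container-backed
test node could be as small as:

  # Purely illustrative: one playbook an "ansible driver" might run per node
  # request; swapping the playbook out could give you VMs, static hosts, etc.
  - hosts: localhost
    gather_facts: no
    tasks:
      - name: Launch a throwaway container to act as the test node
        docker_container:
          name: "{{ node_name | default('nodepool-node') }}"
          image: ubuntu:xenial
          command: sleep infinity
        register: test_node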

> Thanks in advance,
> -Tristan
> 
> [0]: 
> http://specs.openstack.org/openstack-infra/infra-specs/specs/nodepool-drivers.html
> [1]: https://review.openstack.org/#/q/topic:nodepool-drivers



> ___
> OpenStack-Infra mailing list
> OpenStack-Infra@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra


___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

Re: [OpenStack-Infra] Some error occurred when I build the image.

2017-05-22 Thread Paul Belanger
On Mon, May 22, 2017 at 07:03:28AM +, linziw...@itri.org.tw wrote:
> Dear all,
> 
> I have the following error messages when I build the image for establishing 
> the OpenStack 3rd  CI.
> 
> 1)
> 2017-05-22 13:03:49,806 INFO nodepool.image.build.dpc: dib-run-parts Mon May 
> 22 05:03:49 UTC 2017 Running /tmp/in_target.d/install.d/95-chown-jenkins
> 2017-05-22 13:03:49,812 INFO nodepool.image.build.dpc: + chown -R 
> jenkins:jenkins /home/jenkins
> 2017-05-22 13:03:49,814 INFO nodepool.image.build.dpc: chown: invalid user: 
> 'jenkins:jenkins'
> 
> 2)
> After I clone the latest version of the 
> openstack-infra/project-config/nodepool/elements and script folders,
> I execute “nodepool image-build dpc” command and the following messages 
> display immediately.
> 
> 2017-05-22 14:19:35,719 INFO nodepool.image.build.dpc: raise 
> MissingElementException("Element '%s' not found" % element)
> 2017-05-22 14:19:35,719 INFO nodepool.image.build.dpc: 
> diskimage_builder.element_dependencies.MissingElementException: Element 
> 'puppet' not found.
> 
> Could anyone tell me how to fix this?
> 
We recently removed puppet from our diskimages; as a result, you'll need to
update from openstack-infra/project-config to get the latest nodepool
elements. This will cause your images to be rebuilt, but without installing
puppet first.

-PB

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

Re: [OpenStack-Infra] Boston 2017 Summit dinner

2017-05-09 Thread Paul Belanger
On Thu, Apr 27, 2017 at 08:47:58PM -0400, Paul Belanger wrote:
> Greetings!
> 
> Its that time where we all try to figure out when and where to meet up for 
> some
> dinner and drinks in Boston. While I haven't figure out a place to eat
> (suggestion most welcome), maybe we can decide which night to go out.
> 
> As a reminder, the summit schedule has 2 events this year that people may also
> be attending:
> 
>   Mon 8, 6:00pm - 7:30pm - Marketplace Mixer
>   Tue 9, 7:00pm - 10:00pm - StackCity Boston at Fenway Park
> 
> Please take a moment to reply, and which day may be better for you.
> 
>   Sunday: Yes
>   Monday: Yes
>   Tuesday: No
>   Wednesday: Yes
>   Thursday: No
> 
> And, if you have a resturant in mind, please share.
> 
Thanks to everybody who turned out last night. Apologies that we had to split some
people off from the main table.  Hopefully everybody still had an awesome time!

-PB

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

Re: [OpenStack-Infra] Boston 2017 Summit dinner

2017-05-04 Thread Paul Belanger
On Thu, May 04, 2017 at 10:45:36AM -0400, Paul Belanger wrote:
> On Thu, Apr 27, 2017 at 08:47:58PM -0400, Paul Belanger wrote:
> > Greetings!
> > 
> > Its that time where we all try to figure out when and where to meet up for 
> > some
> > dinner and drinks in Boston. While I haven't figure out a place to eat
> > (suggestion most welcome), maybe we can decide which night to go out.
> > 
> > As a reminder, the summit schedule has 2 events this year that people may 
> > also
> > be attending:
> > 
> >   Mon 8, 6:00pm - 7:30pm - Marketplace Mixer
> >   Tue 9, 7:00pm - 10:00pm - StackCity Boston at Fenway Park
> > 
> > Please take a moment to reply, and which day may be better for you.
> > 
> >   Sunday: Yes
> >   Monday: Yes
> >   Tuesday: No
> >   Wednesday: Yes
> >   Thursday: No
> > 
> > And, if you have a resturant in mind, please share.
> > 
> Looks like Sunday might be our best day? Is there any objection on maybe 
> having
> some early dinner and drinks that day?
> 
> Since nobody has suggested a location, I am going to attempt reservations at
> http://thesaltypig.com/ @ 5pm.
> 
Okay, some changes. I had a few people reach out to me; the new date and time is
8:00pm on Monday at http://thesaltypig.com/.

I suggest we meet at the summit mixer and walk over to the restaurant
together.

Expect an email on Monday for an exact location to meet.

-PB

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

Re: [OpenStack-Infra] Boston 2017 Summit dinner

2017-05-04 Thread Paul Belanger
On Thu, Apr 27, 2017 at 08:47:58PM -0400, Paul Belanger wrote:
> Greetings!
> 
> Its that time where we all try to figure out when and where to meet up for 
> some
> dinner and drinks in Boston. While I haven't figure out a place to eat
> (suggestion most welcome), maybe we can decide which night to go out.
> 
> As a reminder, the summit schedule has 2 events this year that people may also
> be attending:
> 
>   Mon 8, 6:00pm - 7:30pm - Marketplace Mixer
>   Tue 9, 7:00pm - 10:00pm - StackCity Boston at Fenway Park
> 
> Please take a moment to reply, and which day may be better for you.
> 
>   Sunday: Yes
>   Monday: Yes
>   Tuesday: No
>   Wednesday: Yes
>   Thursday: No
> 
> And, if you have a resturant in mind, please share.
> 
Looks like Sunday might be our best day? Is there any objection to maybe having
an early dinner and drinks that day?

Since nobody has suggested a location, I am going to attempt reservations at
http://thesaltypig.com/ @ 5pm.

-PB

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

[OpenStack-Infra] Boston 2017 Summit dinner

2017-04-27 Thread Paul Belanger
Greetings!

It's that time when we all try to figure out when and where to meet up for some
dinner and drinks in Boston. While I haven't figured out a place to eat
(suggestions most welcome), maybe we can decide which night to go out.

As a reminder, the summit schedule has 2 events this year that people may also
be attending:

  Mon 8, 6:00pm - 7:30pm - Marketplace Mixer
  Tue 9, 7:00pm - 10:00pm - StackCity Boston at Fenway Park

Please take a moment to reply and indicate which days work better for you.

  Sunday: Yes
  Monday: Yes
  Tuesday: No
  Wednesday: Yes
  Thursday: No

And, if you have a restaurant in mind, please share.

-PB

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

[OpenStack-Infra] Removal of infra-root shell accounts from nodepool DIBs

2017-04-19 Thread Paul Belanger
Greetings!

To make sure everybody is aware, we have just approved cmurphy's patch[1] to
stop installing infra-root users onto our diskimages for nodepool. Fear not,
infra-root users will now be able to directly use the root user to access our
zuul worker nodes.

We'll now be using ansible-role-cloud-launcher[2] to populate the
infra-root-keys keypair for all our clouds. This means that glean will then
inject our keypairs into the authorized_keys file for the root user.

One step closer to dropping puppet from our image build process.

-PB

[1] https://review.openstack.org/#/c/450029/
[2] https://review.openstack.org/#/c/457712/

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

Re: [OpenStack-Infra] New caching reverse proxies

2017-04-04 Thread Paul Belanger
On Tue, Apr 04, 2017 at 11:12:38AM -0700, Clark Boylan wrote:
> Hello,
> 
> Just a heads up that we are experimenting with new caching reverse
> proxies on our cloud region local mirror nodes. The idea here being that
> not every hosted resource is easily mirrorable but we can pretty easily
> cache them locally in regions to reduce errors and speed up downloads.
> More details can be found in https://review.openstack.org/#/c/453241/
> (reviews welcome). The config is all done in the existing mirror host
> apache configuration.
> 
> Currently testing it out with RDO (rsyncs are slow and updates frequent
> so not good AFS mirroring candidate until those items can be sorted out)
> and the puppet modules and tripleo are consuming RDO from here. Possible
> future work includes proxying dockerhub for container image downloads.
> 
> Clark
> 
Thanks for suggesting them! They appear to be working really well. Looking forward
to testing hub.docker.com.

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

Re: [OpenStack-Infra] [Cyborg][meetbot]Could anyone help to merge the long-approved meetbot patch ?

2017-03-13 Thread Paul Belanger
On Tue, Mar 14, 2017 at 01:22:12AM +0800, Zhipeng Huang wrote:
> https://review.openstack.org/#/c/421322/
> 
> It has been ages and would appreciate if anyone could help merge the patch.
> 
We'll have to hold off until after 23:00 UTC to merge this. We have no meetings
during that period, so it will be safe to merge then.

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra


Re: [OpenStack-Infra] [zuul] Feedback requested for tox job definition

2017-03-08 Thread Paul Belanger
On Wed, Mar 08, 2017 at 12:03:21PM -0500, David Shrewsbury wrote:
> They're both fairly easy to understand. I think the multi-playbook option
> might make any customization that we might need to do later a bit easier,
> if that's something we foresee
> doing to these playbooks. If they're pretty much set in stone as they are
> though, I don't
> think it will matter much either way.
> 
Ya, so far 438281 (multi-playbook) seems to be the front runner. But like you
said, none of this is set in stone. It is completely possible some time down the
road another option will be more valid.

For me, 438281 is also easy mode for our JJB conversion.  I've tried to base
the playbook names on how our JJB currently looks today.  Not to say this is the
right or wrong approach, but I think it will make things a little easier for
projects (even openstack-infra) to convert from JJB to ansible.

Once we move away from our nodepool/scripts we bake into images, I imagine we
might have this discussion again.

> -Dave
> 
> On Wed, Mar 8, 2017 at 10:33 AM, Paul Belanger 
> wrote:
> 
> > On Wed, Mar 08, 2017 at 10:20:23AM -0500, Paul Belanger wrote:
> > >
> > > Greetings,
> > >
> > > Allow me to bring to your attention a series of patches which create our
> > first
> > > zuulv3 jobs. Specifically, we are looking to discuss what a generic tox
> > job in
> > > ansible will look like.
> > >
> > > Currently, we have 2 proposed patches for zuul (feature/zuulv3) branch
> > > available:
> > >
> > > Generic tox (single playbook)
> > > - https://review.openstack.org/438281
> > >
> > Apologies, ^ is our multiple playbooks
> >
> > > Generic tox (multi playbook)
> > > - https://review.openstack.org/442180
> > >
> > ^ is our single playbook
> >
> > > Starting with 438281, the main differences lay within the .zuul.yaml
> > file. As
> > > you can see by looking at the code, we are not defining any variables
> > (vars) in
> > > .zuul.yaml. This means, we create 3 separate playbooks (tox-cover.yaml,
> > > tox-py27, tox-linters.yaml) which then contain the variables we need for
> > our tox
> > > role.
> > >
> > > With 442180, we move our tox role variables into .zuul.yaml (vars
> > section) and
> > > use a single playbook (tox.yaml) as our entry point for each job.
> > >
> > > Everything else between the 2 patches is the same. So, with that in
> > mind, which
> > > patch do people prefer?
> > >
> > > -PB
> > >
> > > ___
> > > OpenStack-Infra mailing list
> > > OpenStack-Infra@lists.openstack.org
> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra
> >
> > ___
> > OpenStack-Infra mailing list
> > OpenStack-Infra@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra
> >
> 
> 
> 
> -- 
> David Shrewsbury (Shrews)

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra


Re: [OpenStack-Infra] [zuul] Feedback requested for tox job definition

2017-03-08 Thread Paul Belanger
On Wed, Mar 08, 2017 at 10:20:23AM -0500, Paul Belanger wrote:
> 
> Greetings,
> 
> Allow me to bring to your attention a series of patches which create our first
> zuulv3 jobs. Specifically, we are looking to discuss what a generic tox job in
> ansible will look like.
> 
> Currently, we have 2 proposed patches for zuul (feature/zuulv3) branch
> available:
> 
> Generic tox (single playbook)
> - https://review.openstack.org/438281
> 
Apologies, ^ is our multiple playbooks

> Generic tox (multi playbook)
> - https://review.openstack.org/442180
> 
^ is our single playbook

> Starting with 438281, the main differences lay within the .zuul.yaml file. As
> you can see by looking at the code, we are not defining any variables (vars) 
> in
> .zuul.yaml. This means, we create 3 separate playbooks (tox-cover.yaml, 
> tox-py27, tox-linters.yaml) which then contain the variables we need for our 
> tox
> role.
> 
> With 442180, we move our tox role variables into .zuul.yaml (vars section) and
> use a single playbook (tox.yaml) as our entry point for each job.
> 
> Everything else between the 2 patches is the same. So, with that in mind, 
> which
> patch do people prefer?
> 
> -PB
> 
> ___
> OpenStack-Infra mailing list
> OpenStack-Infra@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra


[OpenStack-Infra] [zuul] Feedback requested for tox job definition

2017-03-08 Thread Paul Belanger

Greetings,

Allow me to bring to your attention a series of patches which create our first
zuulv3 jobs. Specifically, we are looking to discuss what a generic tox job in
ansible will look like.

Currently, we have 2 proposed patches for zuul (feature/zuulv3) branch
available:

Generic tox (single playbook)
- https://review.openstack.org/438281

Generic tox (multi playbook)
- https://review.openstack.org/442180

Starting with 438281, the main differences lie within the .zuul.yaml file. As
you can see by looking at the code, we are not defining any variables (vars) in
.zuul.yaml. This means we create 3 separate playbooks (tox-cover.yaml,
tox-py27, tox-linters.yaml) which then contain the variables we need for our tox
role.

With 442180, we move our tox role variables into .zuul.yaml (vars section) and
use a single playbook (tox.yaml) as our entry point for each job.
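
As a rough illustration of the difference (the job and variable names here are
placeholders, not the exact ones in the reviews), the 442180 style keeps the
per-job data inline:

  # Sketch of the "vars in .zuul.yaml" approach; see the reviews for the real
  # job definitions.
  - job:
      name: tox-py27
      run: playbooks/tox.yaml
      vars:
        tox_envlist: py27

  - job:
      name: tox-linters
      run: playbooks/tox.yaml
      vars:
        tox_envlist: linters

while the 438281 style drops the vars block and points each job at its own
playbook (playbooks/tox-py27.yaml, playbooks/tox-linters.yaml, and so on).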

Everything else between the 2 patches is the same. So, with that in mind, which
patch do people prefer?

-PB

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra


Re: [OpenStack-Infra] Ask.o.o down

2017-03-07 Thread Paul Belanger
On Tue, Mar 07, 2017 at 07:39:13PM +1100, Ian Wienand wrote:
> On 03/07/2017 07:20 PM, Gene Kuo wrote:
> > These errors do line up to the time where it's down.
> > However, I have no idea what cause apache to seg fault.
> 
> Something disappearing underneath it would be my suspicion
> 
> Anyway, I added "CoreDumpDirectory /var/cache/apache2" to
> /etc/apache2/apache2.conf manually (don't think it's puppet managed?)
> 
> Let's see if we can pick up a core dump, we can
> at least then trace it back
> 
You could do something like 359278[1].

[1] https://review.openstack.org/#/c/359278

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra


Re: [OpenStack-Infra] Adding projects to zuulv3-dev.o.o

2017-03-02 Thread Paul Belanger
On Thu, Mar 02, 2017 at 02:15:54PM -0800, James E. Blair wrote:
> Paul Belanger  writes:
> 
> > Greetings!
> >
> > I wanted to start a thread about what people thought about expanding our
> > coverage of projects a little. I've been working on ansible roles for CI 
> > things
> > for a while, and figure it might be a good first step to have zuulv3 test a
> > role[1] to install zuul with ansible.
> >
> > So far, it is up to date with feature/zuulv3 branch and support 
> > ubuntu-trusty,
> > ubuntu-xenial, fedora-25 and centos-7.  I've been using it locally for 
> > testing
> > environments for a while now and would love to start importing it into 
> > zuulv3.
> 
> Well, I don't want to expand Zuul v3 coverage for its own sake yet, for
> all of the reasons mentioned in Monty and Robyn's emails (stability!
> security!).  However, I do think we should start exercising some role
> dependencies, and getting an ansible-based all-in-one deployment is a
> near-term goal, so I think that would be a good restrained addition to
> our coverage.
> 
> Having a job on the zuulv3 repo which uses that role to deploy an
> all-in-one zuul would be a good way of advancing both of those goals.
> 
Right, so this is what I have been working on; playbooks exist in windmill[1] for this.
It will stand up zuul + nodepool + zookeeper as an all-in-one, or multi-node.  I've
mostly been using the repo to validate that things install and start properly, and
even build some DIBs (see a recent console log[2]).

Long term, I was wanting to expand integration testing with it, rather than expanding
it for production / 3rd party usage.  But it works today, and is an easy way to
install everything and make sure things are running.
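
At a high level the all-in-one boils down to something like the following; the
role names other than ansible-role-zuul are from memory and may not match what
windmill actually uses:

  # Condensed sketch of an all-in-one playbook; see the windmill repo for the
  # real playbooks and variables.
  - hosts: all
    become: yes
    roles:
      - ansible-role-zookeeper   # assumed role name
      - ansible-role-nodepool    # assumed role name
      - ansible-role-zuul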

If we want to keep iterating on openstack-infra/zuul, I can copy some playbooks
over or propose a patch to layout.yaml. I'm happy to do whatever people would like.

[1] http://git.openstack.org/cgit/openstack/windmill/tree/playbooks
[2] 
http://logs.openstack.org/11/439611/6/gate/gate-windmill-deploy-ubuntu-xenial/a68d81a/console.html

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra


[OpenStack-Infra] Adding projects to zuulv3-dev.o.o

2017-03-02 Thread Paul Belanger
Greetings!

I wanted to start a thread about what people thought about expanding our
coverage of projects a little. I've been working on ansible roles for CI things
for a while, and figured it might be a good first step to have zuulv3 test a
role[1] to install zuul with ansible.

So far, it is up to date with the feature/zuulv3 branch and supports ubuntu-trusty,
ubuntu-xenial, fedora-25 and centos-7.  I've been using it locally for testing
environments for a while now and would love to start importing it into zuulv3.

Thoughts?

[1] http://git.openstack.org/cgit/openstack/ansible-role-zuul

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra


Re: [OpenStack-Infra] [infracloud] upgrade from mitaka to ocata

2017-03-02 Thread Paul Belanger
On Thu, Mar 02, 2017 at 03:06:26PM -0500, Emilien Macchi wrote:
> Hey,
> 
> I'm working on updating the Puppet manifests to deploy Infracloud on
> OpenStack Ocata:
> https://review.openstack.org/#/c/436503/
> 
> The plan is to support both Mitaka & Ocata in the infracloud module
> until we make the switch.
> Please help in reviewing this patch and let me know any question in the 
> review.
> 
> I plan to work on the next steps once this patch is merged: update
> system-config with new parameters, etc.
> 
Cool, I'll take a look.

I think it would make sense to get something on the books a few weeks out, maybe
starting with infracloud-vanilla.  I know I am interested in trying bifrost
again, but happy to work with others if they would like to get involved.

When we get to the point of upgrading infracloud-chocolate, we should split out
the hardware into the infracloud-strawberry region.

Another thing we need to consider is whether our hardware is moving locations or
not.

-PB

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra


Re: [OpenStack-Infra] [openstack-infra][fuxi-kubernetes]request to be added to the groups in gerrit

2017-03-02 Thread Paul Belanger
On Thu, Mar 02, 2017 at 07:16:18AM +, Zhangni wrote:
> Hi Infra Team,
> 
> Thanks for helping merging the patch that created project fuxi-kubernetes, 
> now based on my understanding from 
> https://docs.openstack.org/infra/manual/creators.htm, could you please add me 
> to fuxi-kubernetes-core in gerrit so that I could add others?
> 
> Thank you very much!
> 
> Zhangni
> 
Done. If you are adding another project in the future, it is always nice to see
the review (patchset) in question when requesting group permissions.  I say nice
because, since I haven't had coffee this morning, I feel pretty lazy searching
for things :)

Enjoy!

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

Re: [OpenStack-Infra] [third-party][ci] Help needed with Gerrit CI accounts

2017-02-28 Thread Paul Belanger
On Tue, Feb 28, 2017 at 06:15:37PM +, Ravi, Goutham wrote:
> Paul, 
> 
> Thank you. I’ve moved ng-openstack...@netapp.com to the new account. Can you 
> disable:
> * netapp-fc-ci (Initially registered with: 
> ng-openstack...@netapp.com)<mailto:ng-openstack...@netapp.com)>
> * netapp-ci 
> (xdl-openstack-jenk...@netapp.com)<mailto:xdl-openstack-jenk...@netapp.com)>
> 
Done

> Also, can these be added to the third-party-ci mail filtering list: 
> https://review.openstack.org/#/admin/groups/270
> * NetApp-ci
> * netapp-eseries-ci
> 
Also done.
> 
> Appreciate your help!
> Goutham
> 
> 
> On 2/28/17, 10:04 AM, "Paul Belanger"  wrote:
> 
> On Tue, Feb 28, 2017 at 01:56:00PM +, Ravi, Goutham wrote:
> > Resending with correct tags:
> > 
> > Hi,
> > 
> > We (NetApp) made some changes to our CI system infrastructure and I’m 
> looking for help regarding the following:
> > 
> > 
> > 1)  Deletion request for older gerrit accounts:
> > 
> > We have CI accounts for each of our storage systems reporting results 
> in Cinder/Manila:
> > 
> > - NetApp-ci  - 
> https://wiki.openstack.org/wiki/ThirdPartySystems/NetApp_CI
> > - netapp-eseries-ci - 
> https://wiki.openstack.org/wiki/ThirdPartySystems/NetApp_Eseries_CI
> > 
> > Can you help deleting these two older accounts on review.openstack.org?
> > 
> > 
> > · netapp-fc-ci (Initially registered with: 
> ng-openstack...@netapp.com)<mailto:ng-openstack...@netapp.com)>
> > 
> > · netapp-ci 
> (xdl-openstack-jenk...@netapp.com)<mailto:xdl-openstack-jenk...@netapp.com)> 
> - This one was created prior to the self-service process.
> > 
> > 
> > (I would like to reuse the email ID: 
> ng-openstack...@netapp.com<mailto:ng-openstack...@netapp.com> for the 
> “NetApp-ci” account.
> > Deleting the netapp-fc-ci account would allow me to claim that email 
> address)
> > 
> > 
> > 2)  Addition of “NetApp-ci” and “netapp-eseries-ci” to the third 
> party CI mail filtering list: 
> (https://review.openstack.org/#/admin/groups/270)?
> > 
> > Your help is appreciated!
> > 
> > Thanks,
> > Goutham
> 
> We don't usually delete gerrit accounts, just disable them.  For the most 
> part,
> you are able to change the email addresses yourself via your profile in 
> gerrit.
> Do you have access to both accounts?
> 
> What I suggest, move ng-openstack...@netapp.com to the proper account, 
> which you
> want to keep. And any other accounts, we can simply disable.
> 
> ---
> Paul
> 
> 

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

Re: [OpenStack-Infra] [third-party][ci] Help needed with Gerrit CI accounts

2017-02-28 Thread Paul Belanger
On Tue, Feb 28, 2017 at 01:56:00PM +, Ravi, Goutham wrote:
> Resending with correct tags:
> 
> Hi,
> 
> We (NetApp) made some changes to our CI system infrastructure and I’m looking 
> for help regarding the following:
> 
> 
> 1)  Deletion request for older gerrit accounts:
> 
> We have CI accounts for each of our storage systems reporting results in 
> Cinder/Manila:
> 
> - NetApp-ci  - https://wiki.openstack.org/wiki/ThirdPartySystems/NetApp_CI
> - netapp-eseries-ci - 
> https://wiki.openstack.org/wiki/ThirdPartySystems/NetApp_Eseries_CI
> 
> Can you help deleting these two older accounts on review.openstack.org?
> 
> 
> · netapp-fc-ci (Initially registered with: 
> ng-openstack...@netapp.com)
> 
> · netapp-ci 
> (xdl-openstack-jenk...@netapp.com) 
> - This one was created prior to the self-service process.
> 
> 
> (I would like to reuse the email ID: 
> ng-openstack...@netapp.com for the 
> “NetApp-ci” account.
> Deleting the netapp-fc-ci account would allow me to claim that email address)
> 
> 
> 2)  Addition of “NetApp-ci” and “netapp-eseries-ci” to the third party CI 
> mail filtering list: (https://review.openstack.org/#/admin/groups/270)?
> 
> Your help is appreciated!
> 
> Thanks,
> Goutham

We don't usually delete gerrit accounts, just disable them.  For the most part,
you are able to change the email addresses yourself via your profile in gerrit.
Do you have access to both accounts?

What I suggest: move ng-openstack...@netapp.com to the proper account, which you
want to keep, and we can simply disable any other accounts.

---
Paul

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

Re: [OpenStack-Infra] [openstack-infra][fuel-ccp-designate] Repository empty core-group

2017-02-14 Thread Paul Belanger
On Tue, Feb 14, 2017 at 01:06:08PM +0400, Peter Razumovsky wrote:
> No one from our team have no rights to merge patches to fuel-ccp-designate.
> Please, help us to land patches asap, it's very important! Gerrit patch
> with created project: https://review.openstack.org/#/c/425035/
> 
Looks like you are properly added to the group now.

> 2017-02-13 18:13 GMT+04:00 Peter Razumovsky :
> 
> >
> > -- Forwarded message --
> > From: Peter Razumovsky 
> > Date: 2017-02-13 13:21 GMT+04:00
> > Subject: [openstack-infra][fuel-ccp-designate] Repository empty core-group
> > To: "OpenStack Development Mailing List (not for usage questions)" <
> > openstack-...@lists.openstack.org>
> >
> >
> > Hi openstack-infra!
> >
> > I recently added new repo named fuel-ccp-designate [1], but we can't merge
> > any patch due to no one consists in fuel-ccp-designate core-group. Can you
> > add me to core-group, and then I'll add other cores to it. Thanks!
> >
> > [1] https://github.com/openstack/fuel-ccp-designate
> >
> > --
> > Best regards,
> > Peter Razumovsky
> >
> >
> >
> > --
> > Best regards,
> > Peter Razumovsky
> >
> 
> 
> 
> -- 
> Best regards,
> Peter Razumovsky

> ___
> OpenStack-Infra mailing list
> OpenStack-Infra@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra


___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra


Re: [OpenStack-Infra] The new openstack.cl...@gmail.com user

2017-02-14 Thread Paul Belanger
On Tue, Feb 14, 2017 at 03:42:53PM +0530, Amrith Kumar wrote:
> It appears that a user registered as "openstack.cl...@gmail.com" has emerged
> as a new bot that is posting a comment on numerous reviews with comments
> like:[1]
> 
> "Patch Set 1: Code-Review-1
> 
> Not urgent to merge just before Ocata is released. And please save load to
> the CI system."
> 
> "Patch Set 1: Code-Review-1
> 
> With more than 50 same topics. Such work seems to be for some marketing
> slides. Why not submit the same corrections in one patch?"
> 
> "Patch Set 1: Code-Review-1
> 
> Such work seems to be for some marketing slides, not for OpenStack
> technology. Why not submit the same corrections in one patch?"
> 
> Would someone in Infra or the TC please take a look into who this anonymous
> user is? I believe that to post reviews, one must register and sign a CLA.
> Could someone look into who may have signed this CLA?
> 
> Thanks,
> 
> -amrith 
> 
> [1]
> https://review.openstack.org/#/q/reviewer:%22OpenStack+Clean+%253Copenstack.
> clean%2540gmail.com%253E%22
> 
> 
> 
> --
> Amrith Kumar
> amrith.ku...@gmail.com
> +1-978-563-9590
> GPG: 0x5e48849a9d21a29b
> 
I've gone ahead and disabled the account for now, until we can look more into it.

> ___
> OpenStack-Infra mailing list
> OpenStack-Infra@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra


___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra


Re: [OpenStack-Infra] PTG team dinner?

2017-02-08 Thread Paul Belanger
On Tue, Feb 07, 2017 at 04:28:26PM -0800, Clark Boylan wrote:
> Hello,
> 
> We failed at doing a lunch or dinner together in Barcelona because
> ETOOMANYTHINGS. Wondering if there is interest in attempting to do a
> dinner during the PTG? Would likely have to be Monday night since I
> expect some people will leave late Tuesday.
> 
> Not sure how many of us are interested and will be at the PTG, but maybe
> we can try putting something together if we get a rough headcount?
> 
> I'm in for what its worth. Also noticed there is a Trader Vic's nearby,
> we could all dress in Hawaiian shirts like fungi :)
> 
Works for me.

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra


Re: [OpenStack-Infra] [opensatck-infra][nodepool] HTTP 405 when deleting nodes (computers in jenkins terms).

2017-02-07 Thread Paul Belanger
On Mon, Feb 06, 2017 at 06:10:18PM -0500, Bob Hansen wrote:
> 
> I'm using these versions:
> nodepool 0.3.0 (yes a bit old, looks like 0.4.1 is current, but the code in
> this area seems to be the same).
> python-jenkins 0.4.13
> zuul 2.1.1
> jenkins 1.651.2
> ubuntu 14.04 LTS.
> diskimage-builder 1.26.1
> 
> Images create ok, nodes are created in jenkins with my images and jobs are
> dispatched to the jenkins slaves the node pool creates. All is well.
> 
> However, when nodepool tries to delete a slave in jenkins, I get this
> exception:
> 
> 2017-02-06 14:10:01,096 ERROR nodepool.NodeDeleter: Exception deleting node
> 1362:
> . skipped stack trace ...
>   File "/usr/lib/python2.7/urllib2.py", line 531, in http_error_default
> raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
> HTTPError: HTTP Error 405: Method Not Allowed
> 
> Digging through the code, it appears that nodepool via python-jenkins is
> really just doing this, which returns the 405.
> 
> https://zvm-dev-jenkins.pokprv.stglabs.ibm.com/computer/test-master-zvm-dev-controller-1377/doDelete
> 
> If a do a similar thing from the jenkins dashboard, just plunking in  that
> url in a browser after I'm authenticated to jenkins. Jenkins tells me I
> must to do an HTTP POST not a GET and the code returned is a 405.
> 
> The best I can tell is the is python-jenkins is doing a GET rather than a
> POST when deleting nodes?
> 
> Anyone have a solution on how to get this to work? Obviously other users of
> nodepool must have this working.
> 
> Thanks for any help!
> 
We in openstack-infra have not seen this issue, mostly because we have migrated
away from Jenkins. What I would suggest is updating python-jenkins to do what
you have suggested, switching from GET to POST, and seeing if nodepool becomes happy.
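
As a quick sanity check before patching python-jenkins, something like the
following (hostname, credentials and the expected status code are all
assumptions) should confirm the endpoint is happy with a POST:

  # One-off check that Jenkins deletes the node when the request is a POST.
  # Depending on the Jenkins config you may also need to pass a CSRF crumb.
  - hosts: localhost
    gather_facts: no
    tasks:
      - name: Delete a Jenkins node via POST
        uri:
          url: "https://jenkins.example.com/computer/test-node-1362/doDelete"
          method: POST
          user: jenkins-admin
          password: "{{ jenkins_api_token }}"
          force_basic_auth: yes
          status_code: 302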

I did take a quick look at review.openstack.org, but didn't see any existing
patches around this issue.

---
Paul

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra


Re: [OpenStack-Infra] Registration

2017-02-02 Thread Paul Belanger
On Thu, Feb 02, 2017 at 09:42:39AM +0100, Carmine Annunziata wrote:
> Hi, i'm Carmine and i would join to openstack community on irc. How can i
> access?

We currently have a few wiki pages that explain the process:

https://wiki.openstack.org/wiki/IRC
https://wiki.openstack.org/wiki/UsingIRC

The OpenStack project uses https://freenode.net/ for the hosting of our
channels.  They also have some information about connecting to freenode.

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra


Re: [OpenStack-Infra] Need to be added to new project core group

2017-02-02 Thread Paul Belanger
On Thu, Feb 02, 2017 at 12:30:01PM -0500, David Moreau Simard wrote:
> Hi,
> 
> The project "ansible-role-ara" was created recently [1].
> 
> Could you please add me ( dms AT redhat.com ) in the core and release groups ?
> I'll take care of the rest of the permissions.
> 
> Thanks,
> 
> [1]: https://review.openstack.org/#/c/426818/
> 
> David Moreau Simard
> Senior Software Engineer | Openstack RDO
> 
> dmsimard = [irc, github, twitter]
> 
Done, go forth and create awesome things!

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra


Re: [OpenStack-Infra] Unable to add myself to the python-redfish-core group I created

2017-01-24 Thread Paul Belanger
On Tue, Jan 24, 2017 at 12:29:54AM +0100, Bruno Cornec wrote:
> Bruno Cornec said on Tue, Jan 24, 2017 at 12:09:46AM +0100:
> > Oops, yes, I was trying to compare to anther group to see what I could
> > do. Thanks a lot Jeremy for having aproved myself.
> > 
> > I'll now add the other members of these groups and start contributing !
> 
> Worked like a charm now !
> 
> As we're at it, could you do the same for the other project that I have 
> create (Alexandria) with the 2 groups:
> alexandria-core (https://review.openstack.org/#/admin/groups/1667,members)
> alexandria-release (https://review.openstack.org/#/admin/groups/1668,members)
> 
> Thanks in advance,
> Bruno.
> -- 
Done

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra


Re: [OpenStack-Infra] Unable to add myself to the python-redfish-core group I created

2017-01-23 Thread Paul Belanger
On Mon, Jan 23, 2017 at 09:10:56PM +0100, Bruno Cornec wrote:
> Hello,
> 
> I'm unable to add myself to the python-redfish-core group I created.
> 
> When using the Web interface at 
> https://review.openstack.org/#/admin/groups/99,members the fields are greyed 
> and I cannot follow the doc at 
> https://review.openstack.org/Documentation/access-control.html to add myself 
> to the group.
> 
You cannot approve yourself into a gerrit group; in the example of
trove-core, you would need to ask the trove PTL for the rights.

> I tried to use another method:
> 
> ssh -p 29418 bruno-cor...@review.openstack.org gerrit set-members 
> python-redfish-core --add bruno-cornec
> fatal: internal server error
> 
> So I think I need help from an admin to be able to modify that group.
> 
> I have the same issue with the other group python-redfish-release at 
> https://review.openstack.org/#/admin/groups/1649,members
> This is annoying as I cannot +2 our first patch 
> https://review.openstack.org/#/c/410852/7 since the integration of the 
> project !
> 
> Any help to solve this is welcome.
> Thanks in advance and best regards,
> Bruno.
> -- 

It is currently a manual process to be added to the gerrit group[1]. I've gone
ahead and done that.

[1] 
http://docs.openstack.org/infra/manual/creators.html#update-the-gerrit-group-members

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra


Re: [OpenStack-Infra] Nodepool config file structure

2017-01-10 Thread Paul Belanger
On Mon, Jan 09, 2017 at 01:58:14PM -0800, James E. Blair wrote:
> cor...@inaugust.com (James E. Blair) writes:
> 
> >> Yup, I think this makes sense and avoids duplicate image data. One other
> >> similarish use case that I don't think this addresses that we should
> >> consider is the one we had in hpcloud and what we do in osic-cloud1
> >> currently. Basically chunk up a provider in several different ways to
> >> affect distribution of nodes based on attributes within that provider. I
> >> don't have any great ideas for how that might look right now, but wonder
> >> if that might also solve the flavor problem. Probably something to think
> >> about before we commit to this.
> >
> > Yeah, I don't think this addresses that problem.  I suspect a real
> > solution to it would look a lot different than what we have now.  I'm
> > open to suggestions.
> 
> Perhaps something like this?  It creates a new "pools" section which is
> groups of instances+labels within a provider.  The actual image uploads
> are still at the provider level.  It's a bit more complicated, in that
> we have to explicitly cross-link the label to the cloud image (the
> previous suggestion implicitly did that by being underneath it in the
> yaml hierarchy).  It does provide a nice retcon for 'nodepool' though.
> :)
> 
>   labels:
> - name: small-ubuntu-trusty
>   ready-script: configure_mirror.sh
>   min-ready: 1
> 
>   providers:
> - name: cloud
>   api-timeout: 60
>   diskimages:
>   - name: ubuntu-trusty
> metadata:
>   foo: bar
>   pools:
> - name: s3500
>   max-servers: 256
>   networks:
> - name: 'GATEWAY_NET_V6'
>   public: True
> labels:
>   - name: small-ubuntu-trusty
>   diskimage: ubuntu-trusty
> ram: 2g
>   - name: large-ubuntu-trusty
>   diskimage: ubuntu-trusty
> ram: 8g
> - name: s3700
>   max-servers: 256
>   networks:
> - name: 'GATEWAY_NET_V6'
>   public: True
> labels:
>   - name: small-ubuntu-trusty
>   diskimage: ubuntu-trusty
> ram: 2g
>   - name: large-ubuntu-trusty
>   diskimage: ubuntu-trusty
> ram: 8g
> 
>   diskimages:
> - name: ubuntu-trusty
>   private-key: /home/nodepool/.ssh/id_rsa
>   elements: ...
> 
It took a while to understand, but I do now. Yes, this is nice actually.

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra


Re: [OpenStack-Infra] Nodepool config file structure

2016-12-20 Thread Paul Belanger
On Tue, Dec 20, 2016 at 10:19:26AM -0800, James E. Blair wrote:
> Hi,
> 
> I've been working on reconciling a confusing part of the Nodepool config
> file: the mapping between labels, provider images, and diskimages.
> 
> Broadly speaking the three constructs are:
> 
> diskimages: the file(s) that DIB produces
> provider images: a disk image uploaded to a provider, combined with
>   flavor information
> labels: a set of nodes created from provider images
> 
> I think the biggest problem is that provider images are combining two
> concepts which should be distinct: uploaded diskimages and flavors used
> for launching nodes.  They hold at least one piece of data for each of
> these functions: the image metadata (which is used at upload time) and
> the flavor (used at node creation time), which makes it impossible to
> disentangle them without a change.
> 
> Adding to the confusion is simply the use of the word 'images' in
> provider images, as it's a little unclear what an image is in this
> context (a single diskimage?  one of may images uploaded to the cloud?
> an artificial construct combining the two?)
> 
> The literal way to reconcile the current configuration syntax would be to
> upload the same diskimage to a provider multiple times, each as a
> different provider image (with its own set of metadata).  That seems
> very wasteful.
> 
> Because we did not include support for that (nor should we have, I
> think), we needed to take at least one step in the direction of
> correcting this situation when making the new zookeeper builders.  To
> that end, we merged this change:
> 
>   https://review.openstack.org/396749
> 
> which removed the diskimage parameter from the provider image stanza.
> This made an implicit 1:1 connection between the provider images and
> diskimages, based on their having the same name.  That means that we
> know which metadata to use when uploading a diskimage to a provider.
> However, it also means that there is now not a way to specify more than
> one flavor with the same image.  That is not a feature we use currently
> (especially since the allocator does not take flavor size into acconut)
> but it is one we want to have in the future.
> 
> I don't see a way to support both of these features (single upload and
> multiple flavors) without a change to the configuration.  I propose that
> we keep 396749 in place, and iterate forward.
> 
> I think we should change the provider images section to separate out the
> parts pertaining to diskimages and those pertaining to flavors.
> Something like:
> 
>   labels:
> - name: small-ubuntu-trusty
>   ready-script: configure_mirror.sh
>   min-ready: 1
>   
>   providers:
> - name: cloud
>   diskimages:
> - name: ubuntu-trusty
>   metadata:
> foo: bar
>   labels:
> - name: small-ubuntu-trusty
>   ram: 2g
> - name: large-ubuntu-trusty
>   ram: 8g
>   
What are you envisioning for the image-type key? I only bring it up since we've
dropped 'images' here.

>   diskimages:
>     - name: ubuntu-trusty
>       private-key: /home/nodepool/.ssh/id_rsa
>       elements: ...
> 
> That also lets us remove the 'providers' section from each label
> definition.  That is used to indicate which providers should be used to
> create nodes of each label, but by associating labels with provider
> copies of diskimages, it is simple to add or remove those label entries
> (which would not affect the diskimage entry, whose addition/removal
> would cause image uploads or deletions).  I also moved the 'private-key'
> attribute to the diskimage section, since that should not differ by
> provider.
> 
++ to providers removal.

I'm on the fence about moving private-key to diskimages, but I understand why: in
our nodepool.yaml we have 74 entries for it, all with the same value. In the use
case I am thinking of, where nodepool-builder is used just to build images, there
wouldn't be a need for a private-key; we'd only use it after uploading.
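
For anyone following along, here is a rough sketch of what a label stanza looks
like in today's nodepool.yaml (written from memory, so the exact keys may differ
slightly, and the provider names are just placeholders):

  labels:
    - name: small-ubuntu-trusty
      image: ubuntu-trusty
      min-ready: 1
      providers:
        - name: cloud
        - name: some-other-cloud

Dropping the per-label providers list, and deriving it from each provider's own
label entries as proposed, would remove a fair bit of that duplication for us.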

> Does this sound like a reasonable path forward?
> 
> Thanks,
> 
> Jim
> 

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra


Re: [OpenStack-Infra] [infra] Please add members to novajoin-* groups

2016-12-07 Thread Paul Belanger
On Tue, Dec 06, 2016 at 10:38:12PM -0500, Rob Crittenden wrote:
> Can you add rcritten at redhat.com and alee at redhat.com to
> novajoin-core and novajoin-release? I thought they had been added during
> the group creation project but apparently not.
> 
I've added rcrit...@redhat.com to both groups, since that account was the original
owner of the review request. You're now free to add more users as you see fit.

> regards
> 
> rob
> 

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra


Re: [OpenStack-Infra] [nitrous] add member to group fuel-plugin-nitrous-core

2016-12-05 Thread Paul Belanger
On Mon, Dec 05, 2016 at 10:47:49AM -0600, Omar Rivera wrote:
> Hello Infra Team,
> 
> Please add me to the fuel-plugin-nitrous-core[1] group.
> 
> My gerrit profile is below:
>   Full Name   Omar Rivera
>   Email Address gomariv...@gmail.com
> 
> New project created with help of infra-team.
> [1] https://review.openstack.org/#/c/400429/
> 
> Thank you,
> Omar Rivera

I just tried doing this, but it looks like you have two gerrit accounts. Please join
#openstack-infra on freenode so we can figure out which account is the correct one.
Then we'll disable the other.

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra


Re: [OpenStack-Infra] [ui-cookiecutter][horizon] init ui-cookiecutter-core group

2016-11-28 Thread Paul Belanger
On Mon, Nov 28, 2016 at 12:56:52AM +, Shuu Mutou wrote:
> Hi,
> 
> Is anyone able to handle this?
> 
Added. Go forth and develop awesome things!

> Regards,
> Shu Muto
> 
> > -Original Message-
> > From: Shuu Mutou [mailto:shu-mu...@rf.jp.nec.com]
> > Sent: Monday, November 21, 2016 9:53 AM
> > To: openstack-infra@lists.openstack.org
> > Subject: [OpenStack-Infra] [ui-cookiecutter][horizon] init
> > ui-cookiecutter-core group
> > 
> > Hi peers,
> > 
> > I created UI-Cookiecutter project[1] with help of infra-team.
> > [1] https://review.openstack.org/#/c/398748/
> > 
> > Since no one has joined the gerrit group ui-cookiecutter-core yet, could
> > someone add me to ui-cookiecutter-core?
> > 
> > My gerrit profile is below:
> >   Username        shu.mutow
> >   Full Name       Shu Muto
> >   Email Address   shu-mu...@rf.jp.nec.com
> > 
> > 
> > Best regards,
> > Shu Muto
> > 

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra


Re: [OpenStack-Infra] Scheduling a Zuul meeting

2016-11-02 Thread Paul Belanger
On Wed, Nov 02, 2016 at 10:30:01AM -0700, James E. Blair wrote:
> Hi,
> 
> Recently, a bunch of folks have expressed an interest in helping with
> Zuul v3.  With more than a handful of people engaged in active
> development, it seems like it might be time to adopt some useful
> development practices.  One of those is scheduling a weekly meeting, the
> other is task tracking.
> 
> The meeting would be a time and place for everyone engaged in Zuul
> development to discuss what they are doing with everyone else, get into
> as much detail as is needed, and help us ensure we're making the
> progress we want to.
> 
> Of course, we will still use the #zuul channel to talk about things as
> they come up.  And we will still participate in the Infrastructure
> meeting as needed to coordinate with the wider team.
> 
> As our team includes people in the Middle East, Europe, North America,
> and Oceania, I propose that we schedule the meeting for Monday at 20:00
> UTC. (19:00 would be a good alternative if 20:00 is bad for folks.
> Straying too far from that tends to make things more difficult for folks
> on either end of our timezone spectrum.)
> 
> Clint Byrum (SpamapS) has volunteered to help organize the tasks needed
> to make progress and encode them into Storyboard.  This will be a bit of
> a work-in-progress for a short while as we work on articulating what's
> needed to grow from a few developers to many more, but at the end of the
> process both Storyboard and our meeting should serve to provide a
> clearer picture of what needs to be done and a fluid mechanism for
> accomplishing it.
> 
> Please let me know if the proposed time (Monday, 20:00 UTC) works for
> you, or if an alternate time would be better.
> 
Monday, 20:00 UTC works for me.

> Thanks,
> 
> Jim
> 

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra


Re: [OpenStack-Infra] pypi mirrors out of sync

2016-10-12 Thread Paul Belanger
On Wed, Oct 12, 2016 at 11:40:44PM +1100, Tony Breeds wrote:
> On Thu, Sep 22, 2016 at 12:28:48PM +1000, Tony Breeds wrote:
> > Hi All,
> > I know a lot of the infra team are in Germany for the sprint, however I'm
> > seeing what seems like a lot of upper-constraint bumps that are failing due to
> > mirrors being out of sync.
> 
> This seems to have happened again.
> 
> In review https://review.openstack.org/#/c/385099/ specifically the logs [1]
> 
> we see:
> ---
> Could not find a version that satisfies the requirement oslo.policy===1.15.0 
> (from -c 
> /home/jenkins/workspace/gate-cross-nova-python27-db-ubuntu-xenial/upper-constraints.txt
>  (line 225)) (from versions: 0.1.0, 0.2.0, 0.3.0, 0.3.1, 0.3.2, 0.4.0, 0.5.0, 
> 0.6.0, 0.7.0, 0.8.0, 0.9.0, 0.10.0, 0.11.0, 0.12.0, 0.13.0, 1.0.0, 1.1.0, 
> 1.2.0, 1.3.0, 1.4.0, 1.5.0, 1.6.0, 1.7.0, 1.8.0, 1.9.0, 1.10.0, 1.11.0, 
> 1.12.0, 1.13.0, 1.14.0)
> No matching distribution found for oslo.policy===1.15.0 (from -c 
> /home/jenkins/workspace/gate-cross-nova-python27-db-ubuntu-xenial/upper-constraints.txt
>  (line 225))
> ---
> 
> A quick check of https://pypi.python.org//simple/oslo-policy/ vs
> http://mirror.regionone.osic-cloud1.openstack.org/pypi/simple/oslo-policy/
> shows that 1.15.0 is on pypi but not our mirrors
> 
> For the sake of transparency this is basically the same thing I mentioned on
> IRC[2]
> 
> Any chance we can get a manual run (of bandersnatch?) to clear the issues?
> 
> Yours Tony.
> 
> [1] 
> http://logs.openstack.org/99/385099/1/check/gate-cross-nova-python27-db-ubuntu-xenial/3468229/console.html#_2016-10-12_04_18_53_131481
>  
> [2]
> http://eavesdrop.openstack.org/irclogs/%23openstack-infra/%23openstack-infra.2016-10-12.log.html#t2016-10-12T05:29:20

Over this past weekend, I noticed our AFS mirror.pypi directory quota was full.
So I increased the quota size; however, I noticed bandersnatch seemed to be stuck
downloading some files.

I first removed the TODO files, but apparently that didn't solve the issue,
since diskimage-builder 1.21.0 was still not on our mirror.  I next forced
what I thought was a full sync, then enabled bandersnatch again via crontab
(this was Sunday night).

It is possible I didn't kick off the full sync properly, as some users have been
mentioning that some packages are missing.



___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra


Re: [OpenStack-Infra] [zun-ui] init zun-ui-core group

2016-09-09 Thread Paul Belanger
On Fri, Sep 09, 2016 at 09:12:31AM +, Shuu Mutou wrote:
> Hi peers, 
> 
> I created Zun-UI project[1] with help of infra-team.
> [1] https://review.openstack.org/#/c/366489/
> 
> Since no one has joined the gerrit group zun-ui-core yet, could someone add me
> to zun-ui-core?
> 
> My gerrit profile is below:
>   Username        shu.mutow
>   Full Name       Shu Muto
>   Email Address   shu-mu...@rf.jp.nec.com
> 
Greetings contributor!

You've been added to zun-ui-core, go forth and make awesome things.

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra


Re: [OpenStack-Infra] Jenkins Job Builder status report

2016-08-13 Thread Paul Belanger
On Sat, Aug 13, 2016 at 11:34:56AM -0400, Kien Ha wrote:
> Hi,
> 
> This week, I have:
> - Update docker build publish to use convert xml
> - Update runscope plugin to use convert xml
> - Update maven-builder to use convert xml
> - Update cfp builder to use convert xml
> - Update findbugs_settings to use convert xml
> - Update helpers.artifactory
> - Fix issue that caused JCloud instance to be set
> - Update jdepend to use convert xml
> - Update Summary Display plugin to use convert xml
> - Update disable-failed-job to use convert xml
> - Update hipchat plugin to use convert xml
> - Add missing docs for maven-targets
> - Fix typo for reporters doc
> 
> Next week I have an exam on Friday, August 19, and so I'll be busy for most
> of my days. I will be available as often as I can if anything is needed.
> 
> Attached below is a link to my JJB project proposal document with a
> complete table of plugins that I have worked on and weekly work log found
> at the bottom of the document [1].
> 
> [1] https://docs.google.com/document/d/17AHluxqiBFcsTCkpyekDOFSTahX50
> pXFmQgjlK-PoEQ/edit

I think it is great you send these every week. Keep up the awesome work.



___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra


Re: [OpenStack-Infra] Manually moving irc archives from #kolla to #openstack-kolla

2016-08-10 Thread Paul Belanger
On Tue, Aug 09, 2016 at 08:25:22PM +, Steven Dake (stdake) wrote:
> Hey folks,
> 
> Several months ago Kolla changed its irc channel from #kolla to 
> #openstack-kolla.  We log our irc via eavesdrop.  Is it possible for anyone 
> with infra root access to manually move the irc channel logs from #kolla to 
> #openstack-kolla?  If there is a little data lost from the 1 day overlap 
> change, that is ok.
> 
> If its not possible, I understand.
> 
I'm not sure we've done this before.  We _can_ move the log files to the new
directory; however, I think we should leave them where they are for historical
purposes.

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra


Re: [OpenStack-Infra] [packaging-dep][publisher] Further steps to proceed with building/publishing deb packages within OpenStack Infra

2016-07-13 Thread Paul Belanger
On Fri, Jul 08, 2016 at 11:31:56AM -0400, Paul Belanger wrote:
> On Fri, Jul 08, 2016 at 10:36:14AM +0300, Ivan Udovichenko wrote:
> > Hello to you all,
> > 
> > From what I understand we're stuck at the point when we need to publish
> > Debian packages to a package repository [1]. It was also discussed on
> > #openstack-pkg IRC channel.
> > 
> > Just to sum up, we need a job which will take source packages which are
> > published to http://tarballs.openstack.org/packaging-deb/ location and
> > publish them to a package repository somewhere within OpenStack Infra.
> > 
> > Monty, could you please elaborate on this question? I see that you were
> > willing to help with this task, but I'm not sure if you have time for
> > that at this point. Paul doesn't have patches for it either.
> > 
> > Paul, may be you can help us with this task ?
> > 
> > I just want to be sure that this task worth discussing on openstack-dev
> > mailing list.
> > 
> > Our team may work on it too, but we're not sure where the starting point
> > is. We also have package publishers within our CI, and we can adapt them
> > to fit in with OpenStack Infra requirements.
> > 
> > I'd be very grateful to hear your thoughts on this question.
> > 
> 
> I started work on the 1st half of the patches[2][3] last week, which adds
> reprepro into the post pipeline for zuul.  I'll ask for some reviewers today 
> in
> openstack-infra. Once these patches land, I'll create the AFS bits needed.
> 
> I then have to submit the 2nd half of the patches for review, which adds logic
> into JJB.  I have that code rewritten locally, and it shouldn't take long to land
> next week.
> 
> If things go as expected, I think we'll be able to try a package build / 
> publish
> by next Friday.
> 
> [2] https://review.openstack.org/#/c/337285/
> [3] https://review.openstack.org/#/c/337286
> > 
> > Thank you!
> > 
> > 
> > 
> > [1]
> > http://lists.openstack.org/pipermail/openstack-dev/2016-April/092667.html
> 
Just a follow-up to this: I've done all the work needed for the
debian-openstack[1] repository to be hosted by openstack-infra.  Right now
reprepro is configured for the jessie-newton codename.

As it stands, there are some changes needed on the deb-packaging team side, but
moving forward, patches to openstack/deb-openstack-pkg-tools should result in
packages being uploaded and imported into reprepro (AFS backend).

[1] http://mirror.dfw.rax.openstack.org/debian-openstack/

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra


Re: [OpenStack-Infra] [packaging-dep][publisher] Further steps to proceed with building/publishing deb packages within OpenStack Infra

2016-07-08 Thread Paul Belanger
On Fri, Jul 08, 2016 at 10:36:14AM +0300, Ivan Udovichenko wrote:
> Hello to you all,
> 
> From what I understand we're stuck at the point when we need to publish
> Debian packages to a package repository [1]. It was also discussed on
> #openstack-pkg IRC channel.
> 
> Just to sum up, we need a job which will take source packages which are
> published to http://tarballs.openstack.org/packaging-deb/ location and
> publish them to a package repository somewhere within OpenStack Infra.
> 
> Monty, could you please elaborate on this question? I see that you were
> willing to help with this task, but I'm not sure if you have time for
> that at this point. Paul doesn't have patches for it either.
> 
> Paul, may be you can help us with this task ?
> 
> I just want to be sure that this task worth discussing on openstack-dev
> mailing list.
> 
> Our team may work on it too, but we're not sure where the starting point
> is. We also have package publishers within our CI, and we can adapt them
> to fit in with OpenStack Infra requirements.
> 
> I'd be very grateful to hear your thoughts on this question.
> 

I started work on the 1st half of the patches[2][3] last week, which adds
reprepro into the post pipeline for zuul.  I'll ask for some reviewers today in
openstack-infra. Once these patches land, I'll create the AFS bits needed.

I then have to submit the 2nd half of the patches for review, which adds logic
into JJB.  I have that code rewritten locally, and it shouldn't take long to land
next week.

If things go as expected, I think we'll be able to try a package build / publish
by next Friday.

[2] https://review.openstack.org/#/c/337285/
[3] https://review.openstack.org/#/c/337286
> 
> Thank you!
> 
> 
> 
> [1]
> http://lists.openstack.org/pipermail/openstack-dev/2016-April/092667.html

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra


Re: [OpenStack-Infra] JJB V2.0 planning

2016-07-06 Thread Paul Belanger
On Mon, Jul 04, 2016 at 05:43:14PM +0100, Darragh Bailey wrote:
> Hi,
> 
> 
> To try and minimise trashing of both core reviews and V2.0 patch author(s),
> I'd like to propose that we pick a time/day every second week for 3-4
> iterations where those interested set aside a set block of time to
> collaborate in getting the main rework patches landed. Consider it a set of
> mini bug days focused on JJB 2.0 API changes.
> 
> To get the ball rolling, I'm going to suggest one of the following 2
> timezones (obviously these suit me best, but I'm available the other days
> as well):
> 14:00-18:00 UTC Thurs (starting 7th July - not available the 14th, hence
> suggesting this Thurs)
> 14:00-18:00 UTC Tues (starting 12th July)
> 
> I'm assuming that later in the day for me aligns better with others, but I
> could be very wrong.
> 
> Also thinking that spinning up a temporary public dedicated IRC chat room
> would be helpful here, probably look to avoid using one of the existing
> meeting rooms because I'm assuming that would conflict with other teams,
> unless someone tells me there is a simple solution to this. Only negative I
> can see is that the chats wouldn't be logged.
> 
You may want to check if the #openstack-sprint channel is available on freenode.
We have logging enabled on it, and it seems like a natural fit for your agenda.

> 
> 
> More info below on why suggest this:
> 
> 
> Having gone through a few cycles where patches get reviewed, reworked and
> then broken by other changes landing, reworked again, reviewed and broken
> again, it can be quite onerous on both author and reviewer getting a change
> that touches a number of places to land as the risk of another patch
> landing causing a merge failure increases dramatically the more places the
> patch touches.
> 
> The set of V2 patches has to bring the existing code through some amount of
> interim steps to make it easy to review, unfortunately given the amount of
> rework to do, the odds of anything else triggering a conflict is pretty
> high and basically faced with the following choices:
> 
>- Take a long time complete the cycle of rework -> review -> rework ->
>break -> rework ->. ...
>- Block landing any changes that touch any of the code impacted by V2
>work until most V2 patches are landed.
> 
> 
> 
> However, if we can get enough cores online around the same time and try for some
> synchronized collaboration, I think it's probably far easier to land a
> series of patches over a few meetings and get everything far enough along
> with much less workload placed on everyone involved that we can then revert
> back to the more async approach without the same issues around the
> remaining changes.
> 
> Expect that this would only take 3-4 of these to get the major part of the
> rewrite in place.
> 
> Thoughts? Does this work for enough other JJB reviewers?
> 
> 
> -- 
> Darragh Bailey
> "Nothing is foolproof to a sufficiently talented fool"



___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra


[OpenStack-Infra] Ubuntu cloud archive (UCA) AFS mirror now live

2016-06-29 Thread Paul Belanger
Greetings,

Thanks to the work done by Jesse Pretorius (odyssey4me)[1], we now have a UCA
AFS mirror[2] online.

Feel free to update your project's testing to use it if needed. As always, if
there are problems, feel free to reach out to us in #openstack-infra.

[1] https://review.openstack.org/#/c/330152/
[2] http://mirror.dfw.rax.openstack.org/ubuntu-cloud-archive/

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra


Re: [OpenStack-Infra] Switching test jobs to Xenial and the Trusty, Xenial split

2016-06-29 Thread Paul Belanger
On Tue, Jun 28, 2016 at 01:23:25PM -0700, Clark Boylan wrote:
> It is an even year again which means there is a new Ubuntu LTS release
> out (Xenial). We currently have Xenial  images building and uploaded and
> the Openstack Ansible and Openstack Puppet groups are both taking
> advantage of them. Our "base" set of devstack jobs also seem happy on
> Xenial as well (see the experimental results on
> https://review.openstack.org/#/c/330835/1). All this to say I think we
> are ready to do the big switch and get everyone running on Xenial by
> default.
> 
> The tough part of this switch is that we will want to continue running
> stable/liberty and stable/mitaka jobs on Trusty and restrict Xenial to
> master (Newton) and future branches. Now if we want
> gate-tempest-dsvm-full to run on Trusty nodes for stable branches and
> Xenial for master we need some way of signaling that between gearman
> client and workers. I think we have two options for doing this (but
> really hope I am overlooking something because I am not super happy with
> these options).
> 
> The first is we can do what we did for the Precise/Trusty split. We used
> zuul parameter functions to set the node type based on the branch the
> build was for. This meant by default everything on old stable went to
> precise and everything on newer branches went to Trusty. If you wanted
> to do anything else like run on CentOS or Fedora then you had to have an
> explicit override in that parameter function. You can see what that
> looks like in the change that removed it,
> https://review.openstack.org/#/c/260214/.
> 
> The big downside to this option is it created confusion for people not
> realizing they need explicit override to break out of the default
> Precise/Trusty or Trusty/Xenial split. The upside to this is we can
> define each job once and for many jobs/projects/individuals they never
> have to think about where the job will run, it is handled for them
> properly.
> 
> The other option is we can explicitly tie every job to a specific node
> type. This means gate-tempest-dsvm-full becomes
> gate-tempest-dsvm-full-trusty and gate-tempest-dsvm-full-xenial (or
> similar). Then in the zuul layout we default to running old stable
> against any job ending in -trusty and run master against any job ending
> in -xenial. I have mocked this up in
> https://review.openstack.org/335166.
> 
Just to follow up on the list: I like this approach; we do it for the ansible and
puppet jobs today.  I would like to use gate-tempest-dsvm-full-ubuntu-xenial and
gate-tempest-dsvm-full-ubuntu-trusty, as that is more in line with our existing
job name formats.  It also lets us do gate-tempest-dsvm-full-centos-7 and
gate-tempest-dsvm-full-debian-jessie if needed.
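
As a rough sketch of how the layout side could then look (the branch regexes
here are only an assumption to illustrate the idea, not a tested config):

  jobs:
    - name: ^gate-tempest-dsvm-full-ubuntu-trusty$
      branch: ^stable/(liberty|mitaka)$
    - name: ^gate-tempest-dsvm-full-ubuntu-xenial$
      branch: ^(?!stable/(liberty|mitaka)).*$

That keeps the distro explicit in the job name, while the layout still decides
which branches run which variant.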

> The downside here is we double our total number of jobs (though we do
> not double the number of gearman job registrations since gearman will
> register a job per node type regardless of option used). It is also much
> more explicit and will likely require a greater understanding of our job
> configs to edit them (this isn't all bad, where things used to mostly
> work before by magic they will now work by explicit design).
> 
> I would like to start transitioning jobs to Xenial soon so feedback on
> this is appreciated. Also if you can come up with better options I would
> love to hear about them as I am not entirely happy about the options
> above.
> 
> Thank you,
> Clark
> 

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra


Re: [OpenStack-Infra] [openstack-dev] There is no Jenkins, only Zuul

2016-06-20 Thread Paul Belanger
On Mon, Jun 20, 2016 at 01:47:24PM +0200, Marek Zawadzki wrote:
> Are 3rd party CIs affected by this change, and if yes, how?
> What's the recommendation for newly created 3rd party CIs - should they use
> the new zuul only, or is it fine to use the existing (and tested) jenkins+zuul
> configuration?
> 
3rd party CI should not be affected by recent changes, as their interface to our
CI system is gerrit (review.openstack.org). Zuulv25 is really only meant to be
consumed by openstack-infra as a stepping stone to zuulv3. At this time I would
not recommend people update their CI systems to consume it.

> Second question - if jobs are to be launched by zuul+ansible, how will the
> worker host be chosen, by ansible-launcher? (Normally, as I understand it, the
> Jenkins master takes care of deciding which hosts are free to run jobs.)
> Are there any specs about the way of managing workers (adding/removing,
> setting # of executors)?
> 
This was and still is controlled by nodepool[1]. Even when we used jenkins,
nodepool was responsible for creating our jenkins slaves. Under zuulv25, this is
still the case except they are called zuul workers now.

[1] http://docs.openstack.org/infra/nodepool/
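
As a very rough sketch (the provider name and numbers are made up), the sizing
knobs live in nodepool.yaml rather than on any Jenkins master:

  labels:
    - name: ubuntu-trusty
      image: ubuntu-trusty
      min-ready: 2
      providers:
        - name: some-cloud

  providers:
    - name: some-cloud
      max-servers: 50
      # plus cloud credentials, images, networks, etc.

Adding or removing capacity is mostly a matter of adjusting max-servers and
min-ready; nodepool takes care of launching and deleting the workers to match.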

> Thank you.
> 
> -marek
> 
> -- 
> Marek Zawadzki
> Mirantis Containerized Control Plane Team
> 
> 

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra


[OpenStack-Infra] jenkins::cgroups usage on long live zuul workers

2016-06-19 Thread Paul Belanger
Greetings,

I spent some time last night looking at ubuntu-xenial and our manifests.
Specifically, I was testing out our wheel-mirror-ubuntu-xenial-amd64.o.o server.
Currently, I am running into an issue where our cgroups[1] configuration[2] is
not working under ubuntu-xenial.  

Since jenkins is now gone, do we still need to worry about cgroups with long
lived zuul-workers? Or do we need to continue supporting cgroups moving forward
with ansible?

Apologies for asking, I've never used cgroups before.

[1] 
http://git.openstack.org/cgit/openstack-infra/puppet-jenkins/tree/manifests/cgroups.pp
[2] 
http://git.openstack.org/cgit/openstack-infra/system-config/tree/modules/openstack_project/manifests/slave.pp#n31

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra


[OpenStack-Infra] Barcelona summit talk ideas

2016-06-14 Thread Paul Belanger
Greetings,

I wanted to start another etherpad[1] for collaboration on summit talk ideas
related to openstack-infra. I think it worked well for Austin and see no reason
not to do it again.

I've already outlined a few ideas of what interests me (and hopefully others),
please take a moment to review and add your own ideas.

We have until July 13 to submit talks[2]:

 JULY 13, 2016 AT 11:59PM PDT (JULY 14 6:59 UTC) IS THE DEADLINE TO SUBMIT A
 TALK.

[1] https://etherpad.openstack.org/p/barcelona-upstream-openstack-infa
[2] https://www.openstack.org/summit-login/login

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

