Re: [openstack-dev] [ALL][PTLs] Community Goals for Rocky: Toggle the debug option at runtime

2018-05-15 Thread Lingxian Kong
Hi,

Maybe I missed the original discussion, but I found that the 'mutable'
configuration implementation relies on oslo.service. Is there any guide for
projects using cotyledon instead?

Cheers,
Lingxian Kong


On Wed, May 16, 2018 at 2:46 AM Doug Hellmann  wrote:

> Excerpts from Lance Bragstad's message of 2018-05-14 18:45:49 -0500:
> >
> > On 05/14/2018 05:46 PM, Doug Hellmann wrote:
> > > Excerpts from Lance Bragstad's message of 2018-05-14 15:20:42 -0500:
> > >> On 05/14/2018 02:24 PM, Doug Hellmann wrote:
> > >>> Excerpts from Lance Bragstad's message of 2018-05-14 13:13:51 -0500:
> >  On 03/19/2018 09:22 AM, Jim Rollenhagen wrote:
> > > On Sat, Mar 17, 2018 at 9:49 PM, Doug Hellmann <
> d...@doughellmann.com
> > > > wrote:
> > >
> > > Both of those are good ideas.
> > >
> > >
> > > Agree. I like the socket idea a bit more as I can imagine some
> > > operators don't want config file changes automatically applied. Do
> we
> > > want to choose one to standardize on or allow each project (or
> > > operators, via config) the choice?
> >  Just to recap, keystone would be listening for when its
> configuration
> >  file changes, and reinitialize the logger if the logging settings
> >  changed, correct?
> > >>> Sort of.
> > >>>
> > >>> Keystone would need to do something to tell oslo.config to re-load
> the
> > >>> config files. In services that rely on oslo.service, this is handled
> > >>> with a SIGHUP handler that calls ConfigOpts.mutate_config_files(), so
> > >>> for Keystone you would want to do something similar.
> > >>>
> > >>> That is, you want to wait for an explicit notification from the
> operator
> > >>> that you should reload the config, and not just watch for the file to
> > >>> change. We could talk about using file modification as a trigger, but
> > >>> reloading is something that may need to be staged across several
> > >>> services in order so we chose for the first version to make the
> trigger
> > >>> explicit. Relying on watching files will also fail when the modified
> > >>> data is not in a file (which will be possible when we finish the
> driver
> > >>> work described in
> > >>>
> http://specs.openstack.org/openstack/oslo-specs/specs/queens/oslo-config-drivers.html
> ).
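For illustration, the oslo.service-style handling described above boils down to
something like the following minimal sketch (only the stdlib signal module and
ConfigOpts.mutate_config_files() from oslo.config are assumed; everything else
here is illustrative):

  import signal

  from oslo_config import cfg

  CONF = cfg.CONF

  def _reload_config(signum, frame):
      # Ask oslo.config to re-read its sources and apply any options
      # marked mutable (e.g. 'debug') without restarting the service.
      CONF.mutate_config_files()

  # Registered once at service startup; the operator then triggers the
  # reload explicitly with `kill -HUP <pid>`.
  signal.signal(signal.SIGHUP, _reload_config)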
> > >> Hmm, these are good points. I wonder if just converting to use
> > >> oslo.service would be a lower bar then?
> > > I thought keystone had moved away from that direction toward deploying
> > > only within Apache? I may be out of touch, or have misunderstood
> > > something, though.
> >
> > Oh - never mind... For some reason I was thinking there was a way to use
> > oslo.service and Apache.
> >
> > Either way, I'll do some more digging before tomorrow. I have this as a
> > topic on keystone's meeting agenda to go through our options [0]. If we
> > do come up with something that doesn't involve intercepting signals
> > (specifically for the reason noted by Kristi and Jim in the mod_wsgi
> > documentation), should the community goal be updated to include that
> > option? Just thinking that we can't be the only service in this position.
>
> I think we've left the implementation details up to the project
> teams, for just that reason. That said, it would be good to document
> how you do it (either formally or with a mailing list thread).
>
> And FWIW, if what you choose to do is monitor a file, that's fine
> as a trigger. I suggest not using the configuration file itself,
> though, for the reasons mentioned earlier.
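A rough sketch of such a file-based trigger, purely for illustration (it assumes
an already-initialized oslo.config CONF object; the trigger path and the polling
approach are placeholders, not an established convention):

  import os
  import time

  from oslo_config import cfg

  TRIGGER_PATH = "/var/run/keystone/reload-config"  # hypothetical path

  def watch_for_reload(conf=cfg.CONF, interval=5):
      # Poll the trigger file's mtime and reload mutable options
      # whenever the operator touches the file.
      last = 0.0
      while True:
          try:
              mtime = os.stat(TRIGGER_PATH).st_mtime
          except OSError:
              mtime = last
          if mtime > last:
              last = mtime
              conf.mutate_config_files()
          time.sleep(interval)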
>
> Doug
>
> PS - I wonder how Apache deals with reloading its own configuration
> file. Is there some sort of hook you could use?
>
> >
> > [0] https://etherpad.openstack.org/p/keystone-weekly-meeting
> >
> > >
> > > Doug
> > >
> > >
> __
> > > OpenStack Development Mailing List (not for usage questions)
> > > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ci][infra][tripleo] Multi-staged check pipelines for Zuul v3 proposal

2018-05-15 Thread Jeremy Stanley
On 2018-05-15 14:52:26 -0600 (-0600), Wesley Hayutin wrote:
[...]
> Would the content then sync to a swift file server at a central
> point for ALL the openstack providers, or would it be sync'd to
> each cloud?
[...]

We haven't previously requested that all the Infra provider donors
support Swift, and even for the ones who do I don't think we can
count on it being available in every region where we run jobs. I
assumed that implementation would be a single (central) Swift tenant
provided by one of our donors who has it, thus the reason for my
performance concerns at "large" artifact sizes.
-- 
Jeremy Stanley


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ci][infra][tripleo] Multi-staged check pipelines for Zuul v3 proposal

2018-05-15 Thread Wesley Hayutin
On Tue, May 15, 2018 at 11:42 AM Jeremy Stanley  wrote:

> On 2018-05-15 17:31:07 +0200 (+0200), Bogdan Dobrelya wrote:
> [...]
> > * upload into a swift container, with an automatic expiration set, the
> > de-duplicated and compressed tarball created with something like:
> >   # docker save $(docker images -q) | gzip -1 > all.tar.xz
> > (I expect it will be something like a 2G file)
> > * something similar for DLRN repos prolly, I'm not an expert for this
> part.
> >
> > Then those stored artifacts to be picked up by the next step in the
> graph,
> > deploying undercloud and overcloud in the single step, like:
> > * fetch the swift containers with repos and container images
> [...]
>
> I do worry a little about network fragility here, as well as
> extremely variable performance. Randomly-selected job nodes could be
> shuffling those files halfway across the globe so either upload or
> download (or both) will experience high round-trip latency as well
> as potentially constrained throughput, packet loss,
> disconnects/interruptions and so on... all the things we deal with
> when trying to rely on the Internet, except magnified by the
> quantity of data being transferred about.
>
> Ultimately still worth trying, I think, but just keep in mind it may
> introduce more issues than it solves.
> --
> Jeremy Stanley
>

Question... If we were to build or update the containers that need an
update (and I'm assuming the overcloud images here as well) in a parent job:

Would the content then sync to a swift file server at a central point for
ALL the openstack providers, or would it be sync'd to each cloud?

Not to throw too much cold water on the idea, but...
I wonder if the time to upload and download the containers and images would
significantly reduce any advantage this process has.

Although centralizing the container updates and images on a per check job
basis sounds attractive, I get the sense we need to be very careful and
fully vet the idea.  At the moment it's also an optimization (maybe), so
I don't see this as a very high priority atm.

Let's bring the discussion to the tripleo meeting next week.  Thanks all!



> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ci][infra][tripleo] Multi-staged check pipelines for Zuul v3 proposal

2018-05-15 Thread Wesley Hayutin
On Mon, May 14, 2018 at 3:16 PM Sagi Shnaidman  wrote:

> Hi, Bogdan
>
> I like the idea with the undercloud job. Actually, if the undercloud fails, I'd
> stop all other jobs, because it doesn't make sense to run them. Seeing the
> same failure in 10 jobs doesn't add too much. So maybe adding the undercloud
> job as a dependency for all multinode jobs would be a great idea. I think it's
> also worth checking how long it will delay jobs. Will all jobs wait until the
> undercloud job has run? Or will they be aborted when the undercloud job
> fails?
>
> However, I'm very sceptical about the multinode containers and scenario jobs;
> they could fail for very different reasons, like race conditions in the
> product or infra issues. Skipping some of them will lead to more
> rechecks from devs trying to discover all problems in a row, which will
> delay the development process significantly.
>
> Thanks
>

I agree on both counts w/ Sagi here.
Thanks Sagi

>
>
> On Mon, May 14, 2018 at 7:15 PM, Bogdan Dobrelya 
> wrote:
>
>> An update for your review please folks
>>
>> Bogdan Dobrelya  writes:
>>>
>>> Hello.
 As Zuul documentation [0] explains, the names "check", "gate", and
 "post"  may be altered for more advanced pipelines. Is it doable to
 introduce, for particular openstack projects, multiple check
 stages/steps as check-1, check-2 and so on? And is it possible to have
 the subsequent steps reuse the environments that the previous steps
 finished with?

 Narrowing down to tripleo CI scope, the problem I'd want us to solve
 with this "virtual RFE", and using such multi-staged check pipelines,
 is reducing (ideally, de-duplicating) some of the common steps for
 existing CI jobs.

>>>
>>> What you're describing sounds more like a job graph within a pipeline.
>>> See:
>>> https://docs.openstack.org/infra/zuul/user/config.html#attr-job.dependencies
>>> for how to configure a job to run only after another job has completed.
>>> There is also a facility to pass data between such jobs.
>>>
>>> ... (skipped) ...
>>>
>>> Creating a job graph to have one job use the results of the previous job
>>> can make sense in a lot of cases.  It doesn't always save *time*
>>> however.
>>>
>>> It's worth noting that in OpenStack's Zuul, we have made an explicit
>>> choice not to have long-running integration jobs depend on shorter pep8
>>> or tox jobs, and that's because we value developer time more than CPU
>>> time.  We would rather run all of the tests and return all of the
>>> results so a developer can fix all of the errors as quickly as possible,
>>> rather than forcing an iterative workflow where they have to fix all the
>>> whitespace issues before the CI system will tell them which actual tests
>>> broke.
>>>
>>> -Jim
>>>
>>
>> I proposed a few zuul dependencies [0], [1] to tripleo CI pipelines for
>> undercloud deployments vs upgrades testing (and some more). Given that
>> those undercloud jobs have not so high fail rates though, I think Emilien
>> is right in his comments and those would buy us nothing.
>>
>> On the other hand, what do you think, folks, about making
>> tripleo-ci-centos-7-3nodes-multinode depend on
>> tripleo-ci-centos-7-containers-multinode [2]? The former seems quite failure-prone
>> and long running, and is non-voting. It deploys (see featuresets configs
>> [3]*) 3 nodes in HA fashion. And it almost never passes when
>> containers-multinode fails - see the CI stats page [4]. I've found only 2
>> cases there of the opposite situation, where containers-multinode fails
>> but 3nodes-multinode passes. So cutting off those future failures via the
>> added dependency *would* buy us something and allow other jobs to wait
>> less before starting, at the reasonable price of a somewhat extended time for
>> the main zuul pipeline. I think it makes sense, and that the extended CI time
>> will not exceed the RDO CI execution times so much as to become a problem. WDYT?
>>
>> [0] https://review.openstack.org/#/c/568275/
>> [1] https://review.openstack.org/#/c/568278/
>> [2] https://review.openstack.org/#/c/568326/
>> [3]
>> https://docs.openstack.org/tripleo-quickstart/latest/feature-configuration.html
>> [4] http://tripleo.org/cistatus.html
>>
>> * ignore column 1, it's obsolete; all CI jobs now use config
>> download AFAICT...
>>
>> --
>> Best regards,
>> Bogdan Dobrelya,
>> Irc #bogdando
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
>
> --
> Best regards
> Sagi Shnaidman
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

Re: [openstack-dev] [ci][infra][tripleo] Multi-staged check pipelines for Zuul v3 proposal

2018-05-15 Thread Wesley Hayutin
On Tue, May 15, 2018 at 1:29 PM James E. Blair  wrote:

> Jeremy Stanley  writes:
>
> > On 2018-05-15 09:40:28 -0700 (-0700), James E. Blair wrote:
> > [...]
> >> We're also talking about making a new kind of job which can continue to
> >> run after it's "finished" so that you could use it to do something like
> >> host a container registry that's used by other jobs running on the
> >> change.  We don't have that feature yet, but if we did, would you prefer
> >> to use that instead of the intermediate swift storage?
> >
> > If the subsequent jobs depending on that one get nodes allocated
> > from the same provider, that could solve a lot of the potential
> > network performance risks as well.
>
> That's... tricky.  We're *also* looking at affinity for buildsets, and
> I'm optimistic we'll end up with something there eventually, but that's
> likely to be a more substantive change and probably won't happen as
> soon.  I do agree it will be nice, especially for use cases like this.
>
> -Jim
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


There is a lot here to unpack and discuss, but I really like the ideas I'm
seeing.
Nice work Bogdan!  I've added it to the tripleo meeting agenda for next week
so we can continue socializing the idea and get feedback.

Thanks!

https://etherpad.openstack.org/p/tripleo-meeting-items
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] review runway status

2018-05-15 Thread melanie witt

On Tue, 15 May 2018 14:27:12 +0800, Chen Ch Ji wrote:
Thanks for sharing. The z/VM driver spec review is marked as END DATE:
2018-05-15.
Thanks to the couple of folks who helped a lot on the review; we still need
more review activity on the patch sets. Can I apply to extend the end date
for the runway?


We haven't done any extensions on end dates for blueprints in runways. 
One of the main ideas of runways is to set a consistent time box for 
items in runways and highlight a variety of blueprints throughout the 
release cycle. We have other blueprints in the queue that are waiting 
for their two week time box in a runway too.


Authors can add their blueprints back to the end of the queue if more 
review time is needed and the blueprint will be added to a runway when 
its turn arrives again. So please feel free to do that if more review 
time is needed.


Best,
-melanie

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo] Encrypted swift volumes by default in the undercloud

2018-05-15 Thread Juan Antonio Osorio
Hello!

As part of the work from the Security Squad, we added the ability for the
containerized undercloud to encrypt the overcloud plans. This is done by
enabling Swift's encrypted volumes, which require barbican. Right now it's
turned off, but I would like to enable it by default [1]. What do you folks
think?

[1] https://review.openstack.org/#/c/567200/

BR

-- 
Juan Antonio Osorio R.
e-mail: jaosor...@gmail.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo] The Weekly Owl - 21st Edition

2018-05-15 Thread Emilien Macchi
Welcome to the twenty-first edition of a weekly update in TripleO world!
The goal is to provide a short reading (less than 5 minutes) to learn
what's new this week.
Any contributions and feedback are welcome.
Link to the previous version:
http://lists.openstack.org/pipermail/openstack-dev/2018-May/130273.html

+-----------------------+
| General announcements |
+-----------------------+

+--> Migration to Storyboard is scheduled for rocky-m2, please be aware of
its usage:
https://docs.openstack.org/infra/manual/developers.html#development-workflow
+--> We have 3 more weeks until milestone 2! Check out the schedule:
https://releases.openstack.org/rocky/schedule.html

+------------------------+
| Continuous Integration |
+------------------------+

+--> Ruck is Matt and Rover is Sagi. Please let them know about any new CI issues.
+--> centos 7.5 blockers were solved, now looking at how we can improve
centos testing and avoid gate downtime in the future
+--> Master promotion is 0 day, Queens is 6 days, Pike is 6 days and Ocata
is 6 days.
+--> Sprint themes are Upgrade CI (new jobs, forward looking release state
machine, voting jobs) and refactor python-tempestconf for service discovery.
+--> Discussion in progress around zuul v3 multi-staged check pipelines in
TripleO CI
+--> More: https://etherpad.openstack.org/p/tripleo-ci-squad-meeting

+----------+
| Upgrades |
+----------+

+--> Collaboration with CI team for upgrade jobs.
+--> Need reviews on FFU work, check the etherpad.
+--> More: https://etherpad.openstack.org/p/tripleo-upgrade-squad-status

+------------+
| Containers |
+------------+

+--> Support of image customization during upload in (good) progress.
+--> Efforts around the all-in-one installer, also good progress.
+--> Preparing next deep dive:
https://etherpad.openstack.org/p/tripleo-deep-dive-containerized-undercloud
+--> More: https://etherpad.openstack.org/p/tripleo-containers-squad-status

+-----------------+
| config-download |
+-----------------+

+--> config download status commands and workflows (need reviews)
+--> UI work still ongoing
+--> Major doc update: https://review.openstack.org/#/c/566606
+--> More: https://etherpad.openstack.org/p/tripleo-config-download-squad-status

+-------------+
| Integration |
+-------------+

+--> Need to add support for NodeDataLookup parameter into
"config-download" deployment mechanism (not started yet).
+--> Need review on https://review.openstack.org/#/c/563112/
+--> More: https://etherpad.openstack.org/p/tripleo-integration-squad-status

+--------+
| UI/CLI |
+--------+

+--> Still working on Network Wizard.
+--> Finishing config-download integration
+--> More: https://etherpad.openstack.org/p/tripleo-ui-cli-squad-status

+-------------+
| Validations |
+-------------+

+--> Custom validations spec ready for reviews:
https://review.openstack.org/#/c/393775/
+--> Mistral workflow plugin
+--> More: https://etherpad.openstack.org/p/tripleo-validations-squad-status

+------------+
| Networking |
+------------+

+--> No updates this week.
+--> More: https://etherpad.openstack.org/p/tripleo-networking-squad-status

+-----------+
| Workflows |
+-----------+

+--> Lots of reviews are needed, please check them out
+--> Workflows should now all use the tripleo.messaging.v1.send workflow to
send messages
+--> More: https://etherpad.openstack.org/p/tripleo-workflows-squad-status

+----------+
| Security |
+----------+

+--> Swift object encryption by default in the undercloud
+--> TLS by default for the overcloud
+--> More: https://etherpad.openstack.org/p/tripleo-security-squad

+-----------+
| Owl fact  |
+-----------+

Barn Owls swallow their prey whole—skin, bones, and all—and they eat up to
1,000 mice each year.
Source: https://www.audubon.org/news/11-fun-facts-about-owls

Thank you all for reading and stay tuned!
--
Your fellow reporter, Emilien Macchi
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ci][infra][tripleo] Multi-staged check pipelines for Zuul v3 proposal

2018-05-15 Thread James E. Blair
Jeremy Stanley  writes:

> On 2018-05-15 09:40:28 -0700 (-0700), James E. Blair wrote:
> [...]
>> We're also talking about making a new kind of job which can continue to
>> run after it's "finished" so that you could use it to do something like
>> host a container registry that's used by other jobs running on the
>> change.  We don't have that feature yet, but if we did, would you prefer
>> to use that instead of the intermediate swift storage?
>
> If the subsequent jobs depending on that one get nodes allocated
> from the same provider, that could solve a lot of the potential
> network performance risks as well.

That's... tricky.  We're *also* looking at affinity for buildsets, and
I'm optimistic we'll end up with something there eventually, but that's
likely to be a more substantive change and probably won't happen as
soon.  I do agree it will be nice, especially for use cases like this.

-Jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements][barbican][daisycloud][freezer][fuel][heat][pyghmi][rpm-packaging][solum][tatu][trove] pycrypto is dead and insecure, you should migrate

2018-05-15 Thread Matthew Thode
On 18-05-15 12:25:04, Zane Bitter wrote:
> On 13/05/18 13:22, Matthew Thode wrote:
> > This is a reminder to the projects called out that they are using old,
> > unmaintained and probably insecure libraries (it's been dead since
> > 2014).  Please migrate off to use the cryptography library.  We'd like
> > to drop pycrypto from requirements for rocky.
> > 
> > See also, the bug, which has most of you cc'd already.
> > 
> > https://bugs.launchpad.net/openstack-requirements/+bug/1749574
> > 
> > +-----------------+-----------------------------------------------------------------+------+--------------------------------+
> > | Repository      | Filename                                                        | Line | Text                           |
> > +-----------------+-----------------------------------------------------------------+------+--------------------------------+
> > | barbican        | requirements.txt                                                |   25 | pycrypto>=2.6 # Public Domain  |
> > | daisycloud-core | code/daisy/requirements.txt                                     |   17 | pycrypto>=2.6 # Public Domain  |
> > | freezer         | requirements.txt                                                |   21 | pycrypto>=2.6 # Public Domain  |
> > | fuel-web        | nailgun/requirements.txt                                        |   24 | pycrypto>=2.6.1                |
> > | heat-cfnclient  | requirements.txt                                                |    2 | PyCrypto>=2.1.0                |
> 
> AFAICT heat-cfnclient isn't actually using PyCrypto, even though it's listed
> in requirements.txt. The whole project is just a light wrapper around
> python-boto (though this wasn't always the case IIRC), so I suspect it's
> just relying on boto for all of the auth stuff.
> 

Thanks for the notice, submitted a review to remove it.
https://review.openstack.org/568646

> > | pyghmi          | requirements.txt                                                |    1 | pycrypto>=2.6                  |
> > | rpm-packaging   | requirements.txt                                                |  189 | pycrypto>=2.6  # Public Domain |
> > | solum           | requirements.txt                                                |   24 | pycrypto>=2.6 # Public Domain  |
> > | tatu            | requirements.txt                                                |    7 | pycrypto>=2.6.1                |
> > | tatu            | test-requirements.txt                                           |    7 | pycrypto>=2.6.1                |
> > | trove           | integration/scripts/files/requirements/fedora-requirements.txt |   30 | pycrypto>=2.6  # Public Domain |
> > | trove           | integration/scripts/files/requirements/ubuntu-requirements.txt |   29 | pycrypto>=2.6  # Public Domain |
> > | trove           | requirements.txt                                                |   47 | pycrypto>=2.6 # Public Domain  |
> > +-----------------+-----------------------------------------------------------------+------+--------------------------------+
> > 
> > 
> > 
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 
Matthew Thode (prometheanfire)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ci][infra][tripleo] Multi-staged check pipelines for Zuul v3 proposal

2018-05-15 Thread Jeremy Stanley
On 2018-05-15 09:40:28 -0700 (-0700), James E. Blair wrote:
[...]
> We're also talking about making a new kind of job which can continue to
> run after it's "finished" so that you could use it to do something like
> host a container registry that's used by other jobs running on the
> change.  We don't have that feature yet, but if we did, would you prefer
> to use that instead of the intermediate swift storage?

If the subsequent jobs depending on that one get nodes allocated
from the same provider, that could solve a lot of the potential
network performance risks as well.
-- 
Jeremy Stanley


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [zVMCloudConnector][python-zvm-sdk][requirements] Unblock webob-1.8.1

2018-05-15 Thread Matthew Thode
Please unblock webob-1.8.1, you are the only library holding it back at
this point.  I don't see a way to submit code to the project so I cc'd
the project in launchpad.

https://bugs.launchpad.net/openstack-requirements/+bug/1765748

-- 
Matthew Thode (prometheanfire)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] [all] TC Report 18-20

2018-05-15 Thread Davanum Srinivas
fyi Jay tried to once -
http://lists.openstack.org/pipermail/openstack-dev/2017-February/thread.html#111511

On Tue, May 15, 2018 at 12:40 PM, Graham Hayes  wrote:
> On 15/05/18 17:33, Tim Bell wrote:
>> From my memory, the LCOO was started in 2015 or 2016. The UC was started at 
>> the end of 2012, start of 2013 (https://www.openstack.org/blog/?p=3777) with 
>> Ryan, JC and I.
>>
>> Tim
>
> Yeap - I misread what mrhillsman said [0].
>
> The point still stands - I think this does need to be discussed, and the
> outcome published to the list.
>
> Any additional background on why we allowed LCOO to operate like this
> would help a lot.
>
> - Graham
>
> 0 -
> http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-04-26.log.html#t2018-04-26T15:03:54
>
>> -Original Message-
>> From: Graham Hayes 
>> Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
>> 
>> Date: Tuesday, 15 May 2018 at 18:22
>> To: "openstack-dev@lists.openstack.org" 
>> Subject: Re: [openstack-dev] [tc] [all] TC Report 18-20
>>
>> ..
>>
>> > # LCOO
>> >
> > > There's been some concern expressed about the Large Contributing
>> > OpenStack Operators (LCOO) group and the way they operate. They use
>> > an [Atlassian Wiki](https://openstack-lcoo.atlassian.net/) and
>> > Slack, and have restricted membership. These things tend to not
>> > align with the norms for tool usage and collaboration in OpenStack.
>> > This topic came up in [late
>> > 
>> April](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-04-26.log.html#t2018-04-26T14:39:36)
>> >
>> > but is worth revisiting in Vancouver.
>>
>> From what I understand, this group came into being before the UC was
>> created - a joint UC/TC/LCOO sync up in Vancouver is probably a good
>> idea.
>>
>>
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] [tc] [all] TC Report 18-20

2018-05-15 Thread Graham Hayes
On 15/05/18 17:33, Tim Bell wrote:
> From my memory, the LCOO was started in 2015 or 2016. The UC was started at 
> the end of 2012, start of 2013 (https://www.openstack.org/blog/?p=3777) with 
> Ryan, JC and I.
> 
> Tim

Yeap - I misread what mrhillsman said [0].

The point still stands - I think this does need to be discussed, and the
outcome published to the list.

Any additional background on why we allowed LCOO to operate like this
would help a lot.

- Graham

0 -
http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-04-26.log.html#t2018-04-26T15:03:54

> -Original Message-
> From: Graham Hayes 
> Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> Date: Tuesday, 15 May 2018 at 18:22
> To: "openstack-dev@lists.openstack.org" 
> Subject: Re: [openstack-dev] [tc] [all] TC Report 18-20
> 
> ..
> 
> > # LCOO
> > 
> > There's been some concern expressed about the Large Contributing
> > OpenStack Operators (LCOO) group and the way they operate. They use
> > an [Atlassian Wiki](https://openstack-lcoo.atlassian.net/) and
> > Slack, and have restricted membership. These things tend to not
> > align with the norms for tool usage and collaboration in OpenStack.
> > This topic came up in [late
> > 
> April](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-04-26.log.html#t2018-04-26T14:39:36)
> > 
> > but is worth revisiting in Vancouver.
> 
> From what I understand, this group came into being before the UC was
> created - a joint UC/TC/LCOO sync up in Vancouver is probably a good
> idea.
> 
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ci][infra][tripleo] Multi-staged check pipelines for Zuul v3 proposal

2018-05-15 Thread James E. Blair
Bogdan Dobrelya  writes:

> * check out testing depends-on things,

(Zuul should have done this for you, but yes.)

> * build repos and all tripleo docker images from these repos,
> * upload into a swift container, with an automatic expiration set, the
> de-duplicated and compressed tarball created with something like:
>   # docker save $(docker images -q) | gzip -1 > all.tar.xz
> (I expect it will be something like a 2G file)
> * something similar for DLRN repos prolly, I'm not an expert for this part.
>
> Then those stored artifacts to be picked up by the next step in the
> graph, deploying undercloud and overcloud in the single step, like:
> * fetch the swift containers with repos and container images
> * docker load -i all.tar.xz
> * populate images into a local registry, as usual
> * something similar for the repos. Includes an offline yum update (we
> already have a compressed repo, right? profit!)
> * deploy UC
> * deploy OC, if a job wants it
>
> And if OC deployment brought into a separate step, we do not need
> local registries, just 'docker load -i all.tar.xz' issued for
> overcloud nodes should replace image prep workflows and registries,
> AFAICT. Not sure with the repos for that case.
>
> I wish to assist with the upstream infra swift setup for tripleo, and
> that plan, just need a blessing and more hands from tripleo CI squad
> ;)

That sounds about right (at least the Zuul parts :).

We're also talking about making a new kind of job which can continue to
run after it's "finished" so that you could use it to do something like
host a container registry that's used by other jobs running on the
change.  We don't have that feature yet, but if we did, would you prefer
to use that instead of the intermediate swift storage?

-Jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] [all] TC Report 18-20

2018-05-15 Thread Tim Bell
From my memory, the LCOO was started in 2015 or 2016. The UC was started at the 
end of 2012, start of 2013 (https://www.openstack.org/blog/?p=3777) with Ryan, 
JC and I.

Tim

-Original Message-
From: Graham Hayes 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Tuesday, 15 May 2018 at 18:22
To: "openstack-dev@lists.openstack.org" 
Subject: Re: [openstack-dev] [tc] [all] TC Report 18-20

..

> # LCOO
> 
> There's been some concern expressed about the Large Contributing
> OpenStack Operators (LCOO) group and the way they operate. They use
> an [Atlassian Wiki](https://openstack-lcoo.atlassian.net/) and
> Slack, and have restricted membership. These things tend to not
> align with the norms for tool usage and collaboration in OpenStack.
> This topic came up in [late
> 
April](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-04-26.log.html#t2018-04-26T14:39:36)
> 
> but is worth revisiting in Vancouver.

From what I understand, this group came into being before the UC was
created - a joint UC/TC/LCOO sync up in Vancouver is probably a good
idea.




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements][barbican][daisycloud][freezer][fuel][heat][pyghmi][rpm-packaging][solum][tatu][trove] pycrypto is dead and insecure, you should migrate

2018-05-15 Thread Zane Bitter

On 13/05/18 13:22, Matthew Thode wrote:

This is a reminder to the projects called out that they are using old,
unmaintained and probably insecure libraries (it's been dead since
2014).  Please migrate off to use the cryptography library.  We'd like
to drop pycrypto from requirements for rocky.

See also, the bug, which has most of you cc'd already.

https://bugs.launchpad.net/openstack-requirements/+bug/1749574
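For most of the projects listed below the work is a matter of swapping primitives;
as a purely illustrative sketch, simple symmetric encryption with the cryptography
library's Fernet recipe looks like this (the exact replacement of course depends on
what each project uses pycrypto for):

  from cryptography.fernet import Fernet

  key = Fernet.generate_key()            # url-safe base64-encoded 32-byte key
  f = Fernet(key)
  token = f.encrypt(b"some secret payload")
  assert f.decrypt(token) == b"some secret payload"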

+-----------------+-----------------------------------------------------------------+------+--------------------------------+
| Repository      | Filename                                                        | Line | Text                           |
+-----------------+-----------------------------------------------------------------+------+--------------------------------+
| barbican        | requirements.txt                                                |   25 | pycrypto>=2.6 # Public Domain  |
| daisycloud-core | code/daisy/requirements.txt                                     |   17 | pycrypto>=2.6 # Public Domain  |
| freezer         | requirements.txt                                                |   21 | pycrypto>=2.6 # Public Domain  |
| fuel-web        | nailgun/requirements.txt                                        |   24 | pycrypto>=2.6.1                |
| heat-cfnclient  | requirements.txt                                                |    2 | PyCrypto>=2.1.0                |


AFAICT heat-cfnclient isn't actually using PyCrypto, even though it's 
listed in requirements.txt. The whole project is just a light wrapper 
around python-boto (though this wasn't always the case IIRC), so I 
suspect it's just relying on boto for all of the auth stuff.



| pyghmi          | requirements.txt                                                |    1 | pycrypto>=2.6                  |
| rpm-packaging   | requirements.txt                                                |  189 | pycrypto>=2.6  # Public Domain |
| solum           | requirements.txt                                                |   24 | pycrypto>=2.6 # Public Domain  |
| tatu            | requirements.txt                                                |    7 | pycrypto>=2.6.1                |
| tatu            | test-requirements.txt                                           |    7 | pycrypto>=2.6.1                |
| trove           | integration/scripts/files/requirements/fedora-requirements.txt |   30 | pycrypto>=2.6  # Public Domain |
| trove           | integration/scripts/files/requirements/ubuntu-requirements.txt |   29 | pycrypto>=2.6  # Public Domain |
| trove           | requirements.txt                                                |   47 | pycrypto>=2.6 # Public Domain  |
+-----------------+-----------------------------------------------------------------+------+--------------------------------+



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] [all] TC Report 18-20

2018-05-15 Thread Graham Hayes
On 15/05/18 16:31, Chris Dent wrote:
> 
> HTML: https://anticdent.org/tc-report-18-20.html
> 
> Trying to write a TC report after a gap of 3 weeks is hard enough,
> but when that gap involves some time off, the TC elections, and the
> run up to summit (next week in
> [Vancouver](https://www.openstack.org/summit/vancouver-2018/)) then
> it gets bewildering. Rather than trying to give anything like a full
> summary, I'll go for some highlights.
> 
> Be aware that since next week is summit and I'll be travelling the
> week after, there will be another gap in reports.
> 
> # Elections
> 
> The elections were for seven positions. Of those, three are new to
> the TC: Graham Hayes, Mohammed Naser, Zane Bitter. Having new people
> is _great_. There's a growing sense that the TC needs to take a more
> active role in helping adapt the culture of OpenStack to its
> changing place in the world (see some of the comments below). Having
> new people helps with that greatly.
> 
> Doug Hellmann has become the chair of the TC, taking the seat long
> held by Thierry. This is the first time (that I'm aware of) that a
> non-Foundation-staff individual has been the chair.
> 
> One of the most interesting parts of the election process was the
> email threads started by Doug. There's hope that existing TC
> members that were not elected in this cycle, those that have
> departed, and anyone else will provide their answers to them too. An
> [email
> reminder](http://lists.openstack.org/pipermail/openstack-dev/2018-May/130382.html)
> 
> exists.
> 
> # Summit
> 
> Is next week, in Vancouver. The TC has several
> [Forum](https://wiki.openstack.org/wiki/Forum/Vancouver2018)
> sessions planned including:
> 
> * [S release
>   goals](https://etherpad.openstack.org/p/YVR-S-release-goals)
> * [Project boundaries and what is
>  
> OpenStack](https://etherpad.openstack.org/p/YVR-forum-TC-project-boundaries)
> 
> * [TC
>   Retrospective](https://etherpad.openstack.org/p/YVR-tc-retrospective)
> * [Cross Community
>  
> Governance](https://etherpad.openstack.org/p/YVR-cross-osf-tech-governance)
> 
> # Corporate Foundation Contributions
> 
> There's ongoing discussion about how [to
> measure](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-04-24.log.html#t2018-04-24T15:43:59)
> 
> upstream contribution from corporate Foundation members and what to
> do if contribution seems lacking. Part of the reason this came up
> was because the mode of contribution from new platinum member,
> Tencent, is not clear. For a platinum member, it should be
> _obvious_.

This is a very important point. By adding a company (especially at this
level) we grant them a certain amount of our credibility. We need to
be sure that this is earned by the new member.

> # LCOO
> 
> There's been some concern expressed about the Large Contributing
> OpenStack Operators (LCOO) group and the way they operate. They use
> an [Atlassian Wiki](https://openstack-lcoo.atlassian.net/) and
> Slack, and have restricted membership. These things tend to not
> align with the norms for tool usage and collaboration in OpenStack.
> This topic came up in [late
> April](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-04-26.log.html#t2018-04-26T14:39:36)
> 
> but is worth revisiting in Vancouver.

From what I understand, this group came into being before the UC was
created - a joint UC/TC/LCOO sync up in Vancouver is probably a good
idea.

> # Constellations
> 
> One of the things that came out in election campaigning is that
> OpenStack needs to be more clear about the many ways that OpenStack
> can be used, in part as a way of being more clear about what
> OpenStack _is_. Constellations are one way to do this and work has
> begun on one for [Scientific
> Computing](https://review.openstack.org/#/c/565466/). There's some
> discussion there on what a constellation is supposed to accomplish.
> If you have an opinion, you should comment.
> 
> # Board Meeting
> 
> The day before summit there is a "combined leadership" meeting with
> the Foundation Board, the User Committee and the Technical
> Committee. Doug has posted a [review of the
> agenda](http://lists.openstack.org/pipermail/openstack-dev/2018-May/130336.html).
> 
> These meetings are open to any Foundation members and often involve
> a lot of insight into the future of OpenStack. And snacks.
> 
> # Feedback, Leadership and Dictatorship of the Projects
> 
> Zane started [an email
> thread](http://lists.openstack.org/pipermail/openstack-dev/2018-May/130375.html)
> 
> about ways to replace or augment the once large and positive
> feedback loop that was present in earlier days of OpenStack. That
> now has the potential to trap us into what he describes as a "local
> maximum". The thread eventually evolved into concerns that the
> individual sub-projects in OpenStack can sometimes have too much
> power and identity compared to the overarching project, leading to
> isolation and difficulty 

Re: [openstack-dev] [ci][infra][tripleo] Multi-staged check pipelines for Zuul v3 proposal

2018-05-15 Thread Bogdan Dobrelya

On 5/15/18 5:08 PM, Sagi Shnaidman wrote:

Bogdan,

I think before final decisions we need to know exactly what price we
need to pay. Without exact numbers it will be difficult to discuss.
If we need to wait 80 mins for the undercloud-containers job to finish before
starting all other jobs, it will be about 4.5 hours to wait for a result
(+ 4.5 hours in the gate), which is too big a price imho and isn't worth the
effort.


What are the exact numbers we are talking about?


I fully agree but don't have those numbers, sorry! As I noted above,
those are definitely sitting in openstack-infra's elastic search DB;
they just need to be extracted with some assistance from folks who know more
about that!




Thanks


On Tue, May 15, 2018 at 3:07 PM, Bogdan Dobrelya > wrote:


Let me clarify the problem I want to solve with pipelines.

It is getting *hard* to develop things and move patches to the Happy
End (merged):
- Patches wait too long for CI jobs to start. It should be minutes
and not hours of waiting.
- If a patch fails a job w/o a good reason, the consequent recheck
operation repeats the waiting all over again.

How pipelines may help solve it?
Pipelines only alleviate, not solve the problem of waiting. We only
want to build pipelines for the main zuul check process, omitting
gating and RDO CI (for now).

There are two cases to consider:
- A patch succeeds all checks
- A patch fails a check with dependencies

The latter case benefits us the most when pipelines are designed
as proposed here, so that any jobs expected to fail when a
dependency fails will be omitted from execution. This saves a lot of HW
resources and zuul queue places, making them available for other
patches and allowing those to have CI jobs started faster (less
waiting!). When we have "recheck storms", like because of some known
intermittent side issue, that outcome is multiplied by the recheck
storm um... level, and delivers even better and absolutely amazing
results :) The Zuul queue will not grow insanely, getting
overwhelmed by multiple clones of rechecked jobs that are highly likely
to fail and blocking other patches that might have a chance
to pass checks, being unaffected by that intermittent issue.

And for the first case, when a patch succeeds, it takes some
extended time, and that is the price to pay. How much time it takes
to finish in a pipeline fully depends on implementation.

The effectiveness could only be measured with numbers extracted from
elastic search data, like average time to wait for a job to start,
success vs fail execution time percentiles for a job, average amount
of rechecks, recheck storms history et al. I don't have that data
and don't know how to get it. Any help with that is very appreciated
and could really help to move the proposed patches forward or
decline it. And we could then compare "before" and "after" as well.

I hope that explains the problem scope and the methodology to
address that.


On 5/14/18 6:15 PM, Bogdan Dobrelya wrote:

An update for your review please folks

Bogdan Dobrelya  writes:

Hello.
As Zuul documentation [0] explains, the names "check", "gate", and
"post" may be altered for more advanced pipelines. Is it doable to
introduce, for particular openstack projects, multiple check
stages/steps as check-1, check-2 and so on? And is it possible to have
the subsequent steps reuse the environments that the previous steps
finished with?

Narrowing down to tripleo CI scope, the problem I'd want us to solve
with this "virtual RFE", and using such multi-staged check pipelines,
is reducing (ideally, de-duplicating) some of the common steps for
existing CI jobs.


What you're describing sounds more like a job graph within a
pipeline.
See:

https://docs.openstack.org/infra/zuul/user/config.html#attr-job.dependencies



for how to configure a job to run only after another job has
completed.
There is also a facility to pass data between such jobs.

... (skipped) ...

Creating a job graph to have one job use the results of the
previous job
can make sense in a lot of cases.  It doesn't always save *time*
however.

It's worth noting that in OpenStack's Zuul, we have made an
explicit
choice not to have 

Re: [openstack-dev] [ci][infra][tripleo] Multi-staged check pipelines for Zuul v3 proposal

2018-05-15 Thread Jeremy Stanley
On 2018-05-15 17:31:07 +0200 (+0200), Bogdan Dobrelya wrote:
[...]
> * upload into a swift container, with an automatic expiration set, the
> de-duplicated and compressed tarball created with something like:
>   # docker save $(docker images -q) | gzip -1 > all.tar.xz
> (I expect it will be something like a 2G file)
> * something similar for DLRN repos prolly, I'm not an expert for this part.
> 
> Then those stored artifacts to be picked up by the next step in the graph,
> deploying undercloud and overcloud in the single step, like:
> * fetch the swift containers with repos and container images
[...]

I do worry a little about network fragility here, as well as
extremely variable performance. Randomly-selected job nodes could be
shuffling those files halfway across the globe so either upload or
download (or both) will experience high round-trip latency as well
as potentially constrained throughput, packet loss,
disconnects/interruptions and so on... all the things we deal with
when trying to rely on the Internet, except magnified by the
quantity of data being transferred about.

Ultimately still worth trying, I think, but just keep in mind it may
introduce more issues than it solves.
-- 
Jeremy Stanley


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tc] [all] TC Report 18-20

2018-05-15 Thread Chris Dent


HTML: https://anticdent.org/tc-report-18-20.html

Trying to write a TC report after a gap of 3 weeks is hard enough,
but when that gap involves some time off, the TC elections, and the
run up to summit (next week in
[Vancouver](https://www.openstack.org/summit/vancouver-2018/)) then
it gets bewildering. Rather than trying to give anything like a full
summary, I'll go for some highlights.

Be aware that since next week is summit and I'll be travelling the
week after, there will be another gap in reports.

# Elections

The elections were for seven positions. Of those, three are new to
the TC: Graham Hayes, Mohammed Naser, Zane Bitter. Having new people
is _great_. There's a growing sense that the TC needs to take a more
active role in helping adapt the culture of OpenStack to its
changing place in the world (see some of the comments below). Having
new people helps with that greatly.

Doug Hellmann has become the chair of the TC, taking the seat long
held by Thierry. This is the first time (that I'm aware of) that a
non-Foundation-staff individual has been the chair.

One of the most interesting parts of the election process was the
email threads started by Doug. There's hope that existing TC
members that were not elected in this cycle, those that have
departed, and anyone else will provide their answers to them too. An
[email
reminder](http://lists.openstack.org/pipermail/openstack-dev/2018-May/130382.html)
exists.

# Summit

Is next week, in Vancouver. The TC has several
[Forum](https://wiki.openstack.org/wiki/Forum/Vancouver2018)
sessions planned including:

* [S release
  goals](https://etherpad.openstack.org/p/YVR-S-release-goals)
* [Project boundaries and what is
  OpenStack](https://etherpad.openstack.org/p/YVR-forum-TC-project-boundaries)
* [TC
  Retrospective](https://etherpad.openstack.org/p/YVR-tc-retrospective)
* [Cross Community
  Governance](https://etherpad.openstack.org/p/YVR-cross-osf-tech-governance)

# Corporate Foundation Contributions

There's ongoing discussion about how [to
measure](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-04-24.log.html#t2018-04-24T15:43:59)
upstream contribution from corporate Foundation members and what to
do if contribution seems lacking. Part of the reason this came up
was because the mode of contribution from new platinum member,
Tencent, is not clear. For a platinum member, it should be
_obvious_.

# LCOO

There's been some concern expressed about the Large Contributing
OpenStack Operators (LCOO) group and the way they operate. They use
an [Atlassian Wiki](https://openstack-lcoo.atlassian.net/) and
Slack, and have restricted membership. These things tend to not
align with the norms for tool usage and collaboration in OpenStack.
This topic came up in [late
April](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-04-26.log.html#t2018-04-26T14:39:36)
but is worth revisiting in Vancouver.

# Constellations

One of the things that came out in election campaigning is that
OpenStack needs to be more clear about the many ways that OpenStack
can be used, in part as a way of being more clear about what
OpenStack _is_. Constellations are one way to do this and work has
begun on one for [Scientific
Computing](https://review.openstack.org/#/c/565466/). There's some
discussion there on what a constellation is supposed to accomplish.
If you have an opinion, you should comment.

# Board Meeting

The day before summit there is a "combined leadership" meeting with
the Foundation Board, the User Committee and the Technical
Committee. Doug has posted a [review of the
agenda](http://lists.openstack.org/pipermail/openstack-dev/2018-May/130336.html).
These meetings are open to any Foundation members and often involve
a lot of insight into the future of OpenStack. And snacks.

# Feedback, Leadership and Dictatorship of the Projects

Zane started [an email
thread](http://lists.openstack.org/pipermail/openstack-dev/2018-May/130375.html)
about ways to replace or augment the once large and positive
feedback loop that was present in earlier days of OpenStack. That
now has the potential to trap us into what he describes as a "local
maximum". The thread eventually evolved into concerns that the
individual sub-projects in OpenStack can sometimes have too much
power and identity compared to the overarching project, leading to
isolation and difficulty getting overarching things done. There was a
bit of discussion about this [in
IRC](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-05-11.log.html#t2018-05-11T19:13:02)
but the important parts are in the several messages in the thread.

Some people think that the community goals help to fill some of this
void. Others thinks this is not quite enough and perhaps project
teams as a point of emphasis is ["no longer
optimal"](http://lists.openstack.org/pipermail/openstack-dev/2018-May/130436.html).

But in all this talk of change, how do we do the work if we're

Re: [openstack-dev] [ci][infra][tripleo] Multi-staged check pipelines for Zuul v3 proposal

2018-05-15 Thread Bogdan Dobrelya

On 5/15/18 4:30 PM, James E. Blair wrote:

Bogdan Dobrelya  writes:


Added a few more patches [0], [1] based on the discussion results. PTAL folks.
Wrt the remaining ones in the topic, I'd propose to give it a try and revert
it if it proves to do more harm than good.
Thank you for the feedback!

The next step could be reusing artifacts, like DLRN repos and
containers built for patches and hosted undercloud, in the consequent
pipelined jobs. But I'm not sure how to even approach that.

[0] https://review.openstack.org/#/c/568536/
[1] https://review.openstack.org/#/c/568543/


In order to use an artifact in a dependent job, you need to store it
somewhere and retrieve it.

In the parent job, I'd recommend storing the artifact on the log server
(in an "artifacts/" directory) next to the job's logs.  The log server
is essentially a time-limited artifact repository keyed on the zuul
build UUID.

Pass the URL to the child job using the zuul_return Ansible module.

Have the child job fetch it from the log server using the URL it gets.

However, don't do that if the artifacts are very large -- more than a
few MB -- we'll end up running out of space quickly.

In that case, please volunteer some time to help the infra team set up a
swift container to store these artifacts.  We don't need to *run*
swift -- we have clouds with swift already.  We just need some help
setting up accounts, secrets, and Ansible roles to use it from Zuul.


Thank you, that's a good proposal! So once we have done that upstream 
infra swift setup for tripleo, the 1st step in the job dependency graph 
might use quickstart to do something like:


* check out the patches under test (depends-on things),
* build repos and all tripleo docker images from these repos,
* upload into a swift container, with an automatic expiration set, the 
de-duplicated and compressed tarball created with something like:

  # docker save $(docker images -q) | gzip -1 > all.tar.gz
(I expect it will be something like a 2G file)
* something similar for DLRN repos, probably; I'm not an expert on this part.

Those stored artifacts would then be picked up by the next step in the 
graph, deploying the undercloud and overcloud in a single step, like:

* fetch the swift containers with repos and container images
* docker load -i all.tar.gz
* populate images into a local registry, as usual
* something similar for the repos. This includes an offline yum update (we 
already have a compressed repo, right? profit!)

* deploy UC
* deploy OC, if a job wants it

And if the OC deployment is brought into a separate step, we do not need local 
registries; just 'docker load -i all.tar.gz' issued on the overcloud nodes 
should replace the image prep workflows and registries, AFAICT. Not sure 
about the repos in that case.
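
To make the fetch-and-load part above a bit more concrete, here is a minimal 
Ansible sketch of that consumer step. It assumes the previous step has already 
published the tarball and handed its URL down as a job variable; the variable 
name, host group and paths are purely illustrative, not existing tripleo-ci 
definitions:

  # Sketch of the consumer step: fetch the image tarball produced by the
  # previous step and load it into docker on the target node.
  # "images_artifact_url" is an assumed variable passed down from the
  # parent job.
  - hosts: undercloud
    tasks:
      - name: Fetch the de-duplicated image tarball from the artifact store
        get_url:
          url: "{{ images_artifact_url }}"
          dest: /tmp/all.tar.gz

      - name: Load all container images from the tarball
        command: docker load -i /tmp/all.tar.gz
        become: true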


I'd be glad to assist with the upstream infra swift setup for tripleo and 
with that plan; I just need a blessing and more hands from the tripleo CI squad ;)




-Jim





--
Best regards,
Bogdan Dobrelya,
Irc #bogdando

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [SIG][Edge-computing][FEMDC] Wed. 16 May - FEMDC IRC Meeting 15:00 UTC

2018-05-15 Thread Dimitri Pertin

Dear all,

Here is a gentle reminder regarding the FEMDC meeting that was postponed 
from last week to tomorrow, May 16th, at 15:00 UTC.


As a consequence, the meeting will be held on #edge-computing-irc

This meeting will focus on the preparation of the Vancouver summit 
(presentations, F2F sessions, ...). You can already check and fill this 
pad with your wishes/ideas:

https://etherpad.openstack.org/p/FEMDC_Vancouver

As usual, a draft of the agenda is available at line 550 and you are 
very welcome to add any item:

https://etherpad.openstack.org/p/massively_distributed_ircmeetings_2018

Best regards,

Dimitri



 Forwarded Message 
Subject: [Edge-computing] [FEMDC] IRC meeting postponed to next Wednesday
Date: Wed, 9 May 2018 15:50:45 +0200 (CEST)
From: lebre.adr...@free.fr
To: OpenStack Development Mailing List (not for usage questions) 
, openstack-s...@lists.openstack.org, 
edge-comput...@lists.openstack.org


Dear all,
Neither Paul-Andre nor I can chair the meeting today, so we propose to 
postpone it by one week. The agenda will be delivered soon, but you can 
expect that the next meeting will focus on the preparation of the 
Vancouver summit (presentations, F2F meetings...).

Best regards, ad_ri3n_

___
Edge-computing mailing list
edge-comput...@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/edge-computing

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ci][infra][tripleo] Multi-staged check pipelines for Zuul v3 proposal

2018-05-15 Thread Sagi Shnaidman
Bogdan,

I think before any final decisions we need to know exactly what price we
would need to pay. Without exact numbers it will be difficult to discuss.
If we need to wait 80 minutes for the undercloud-containers job to finish before
starting all other jobs, it will be about 4.5 hours to wait for a result (+
4.5 hours in the gate), which is too big a price IMHO and isn't worth the effort.

What are the exact numbers we are talking about?

Thanks


On Tue, May 15, 2018 at 3:07 PM, Bogdan Dobrelya 
wrote:

> Let me clarify the problem I want to solve with pipelines.
>
> It is getting *hard* to develop things and move patches to the Happy End
> (merged):
> - Patches wait too long for CI jobs to start. It should be minutes and not
> hours of waiting.
> - If a patch fails a job w/o a good reason, the subsequent recheck
> operation repeats the waiting all over again.
>
> How pipelines may help solve it?
> Pipelines only alleviate, not solve the problem of waiting. We only want
> to build pipelines for the main zuul check process, omitting gating and RDO
> CI (for now).
>
> There are two cases to consider:
> - A patch succeeds all checks
> - A patch fails a check with dependencies
>
> The latter case benefits us the most, when pipelines are designed as
> proposed here, so that any jobs expected to fail, when a dependency
> fails, will be omitted from execution. This saves a lot of HW resources and
> zuul queue slots, making them available for other patches and allowing
> those to have CI jobs started faster (less waiting!). When we have "recheck
> storms", like because of some known intermittent side issue, that outcome
> is multiplied by the recheck storm um... level, and delivers even better
> and absolutely amazing results :) The Zuul queue will not grow insanely,
> getting overwhelmed by multiple clones of rechecked jobs highly likely
> doomed to fail, and blocking other patches that might have a chance to pass
> checks as they are not affected by that intermittent issue.
>
> And for the first case, when a patch succeeds, it takes some extended
> time, and that is the price to pay. How much time it takes to finish in a
> pipeline fully depends on implementation.
>
> The effectiveness could only be measured with numbers extracted from
> elastic search data, like average time to wait for a job to start, success
> vs fail execution time percentiles for a job, average amount of rechecks,
> recheck storms history et al. I don't have that data and don't know how to
> get it. Any help with that is very appreciated and could really help to
> move the proposed patches forward or decline it. And we could then compare
> "before" and "after" as well.
>
> I hope that explains the problem scope and the methodology to address that.
>
>
> On 5/14/18 6:15 PM, Bogdan Dobrelya wrote:
>
>> An update for your review please folks
>>
>> Bogdan Dobrelya  writes:
>>>
>>> Hello.
 As Zuul documentation [0] explains, the names "check", "gate", and
 "post"  may be altered for more advanced pipelines. Is it doable to
 introduce, for particular openstack projects, multiple check
 stages/steps as check-1, check-2 and so on? And is it possible to make
 the consequent steps reusing environments from the previous steps
 finished with?

 Narrowing down to tripleo CI scope, the problem I'd want we to solve
 with this "virtual RFE", and using such multi-staged check pipelines,
 is reducing (ideally, de-duplicating) some of the common steps for
 existing CI jobs.

>>>
>>> What you're describing sounds more like a job graph within a pipeline.
>>> See: https://docs.openstack.org/infra/zuul/user/config.html#attr-
>>> job.dependencies
>>> for how to configure a job to run only after another job has completed.
>>> There is also a facility to pass data between such jobs.
>>>
>>> ... (skipped) ...
>>>
>>> Creating a job graph to have one job use the results of the previous job
>>> can make sense in a lot of cases.  It doesn't always save *time*
>>> however.
>>>
>>> It's worth noting that in OpenStack's Zuul, we have made an explicit
>>> choice not to have long-running integration jobs depend on shorter pep8
>>> or tox jobs, and that's because we value developer time more than CPU
>>> time.  We would rather run all of the tests and return all of the
>>> results so a developer can fix all of the errors as quickly as possible,
>>> rather than forcing an iterative workflow where they have to fix all the
>>> whitespace issues before the CI system will tell them which actual tests
>>> broke.
>>>
>>> -Jim
>>>
>>
>> I proposed a few zuul dependencies [0], [1] to tripleo CI pipelines for
>> undercloud deployments vs upgrades testing (and some more). Given that
>> those undercloud jobs have not so high fail rates though, I think Emilien
>> is right in his comments and those would buy us nothing.
>>
>>  From the other side, what do you think folks of making the
>> tripleo-ci-centos-7-3nodes-multinode depend on
>> 

Re: [openstack-dev] [tripleo] Migration to Storyboard

2018-05-15 Thread Alex Schultz
Bumping this up so folks can review this.  It was mentioned in this
week's meeting that it would be a good idea for folks to take a look
at Storyboard to get familiar with it.  The upstream docs have been
updated[0] to point to the differences when dealing with proposed
patches.  Please take some time to review this and raise any
concerns/issues now.

Thanks,
-Alex

[0] https://docs.openstack.org/infra/manual/developers.html#development-workflow

On Wed, May 9, 2018 at 1:24 PM, Alex Schultz  wrote:
> Hello tripleo folks,
>
> So we've been experimenting with migrating some squads over to
> storyboard[0] but this seems to be causing more issues than perhaps
> it's worth.  Since the upstream community would like to standardize on
> Storyboard at some point, I would propose that we do a cut over of all
> the tripleo bugs/blueprints from Launchpad to Storyboard.
>
> In the irc meeting this week[1], I asked that the tripleo-ci team make
> sure the existing scripts that we use to monitor bugs for CI support
> Storyboard.  I would consider this a prerequisite for the migration.
> I am thinking it would be beneficial to get this done before, or as
> close as possible to, M2.
>
> Thoughts, concerns, etc?
>
> Thanks,
> -Alex
>
> [0] https://storyboard.openstack.org/#!/project_group/76
> [1] 
> http://eavesdrop.openstack.org/meetings/tripleo/2018/tripleo.2018-05-08-14.00.log.html#l-42

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo] Containerized Undercloud deep-dive

2018-05-15 Thread Emilien Macchi
Dan and I are organizing a deep-dive session focused on the containerized
undercloud.

https://etherpad.openstack.org/p/tripleo-deep-dive-containerized-undercloud

We proposed a date + list of topics but feel free to comment and ask for
topics/questions.
Thanks,
-- 
Emilien & Dan
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ALL][PTLs] Community Goals for Rocky: Toggle the debug option at runtime

2018-05-15 Thread Doug Hellmann
Excerpts from Lance Bragstad's message of 2018-05-14 18:45:49 -0500:
> 
> On 05/14/2018 05:46 PM, Doug Hellmann wrote:
> > Excerpts from Lance Bragstad's message of 2018-05-14 15:20:42 -0500:
> >> On 05/14/2018 02:24 PM, Doug Hellmann wrote:
> >>> Excerpts from Lance Bragstad's message of 2018-05-14 13:13:51 -0500:
>  On 03/19/2018 09:22 AM, Jim Rollenhagen wrote:
> > On Sat, Mar 17, 2018 at 9:49 PM, Doug Hellmann  > > wrote:
> >
> > Both of those are good ideas.
> >
> >
> > Agree. I like the socket idea a bit more as I can imagine some
> > operators don't want config file changes automatically applied. Do we
> > want to choose one to standardize on or allow each project (or
> > operators, via config) the choice?
>  Just to recap, keystone would be listening for when it's configuration
>  file changes, and reinitialize the logger if the logging settings
>  changed, correct?
> >>> Sort of.
> >>>
> >>> Keystone would need to do something to tell oslo.config to re-load the
> >>> config files. In services that rely on oslo.service, this is handled
> >>> with a SIGHUP handler that calls ConfigOpts.mutate_config_files(), so
> >>> for Keystone you would want to do something similar.
> >>>
> >>> That is, you want to wait for an explicit notification from the operator
> >>> that you should reload the config, and not just watch for the file to
> >>> change. We could talk about using file modification as a trigger, but
> >>> reloading is something that may need to be staged across several
> >>> services in order so we chose for the first version to make the trigger
> >>> explicit. Relying on watching files will also fail when the modified
> >>> data is not in a file (which will be possible when we finish the driver
> >>> work described in
> >>> http://specs.openstack.org/openstack/oslo-specs/specs/queens/oslo-config-drivers.html).
> >> Hmm, these are good points. I wonder if just converting to use
> >> oslo.service would be a lower bar then?
> > I thought keystone had moved away from that direction toward deploying
> > only within Apache? I may be out of touch, or have misunderstood
> > something, though.
> 
> Oh - never mind... For some reason I was thinking there was a way to use
> oslo.service and Apache.
> 
> Either way, I'll do some more digging before tomorrow. I have this as a
> topic on keystone's meeting agenda to go through our options [0]. If we
> do come up with something that doesn't involve intercepting signals
> (specifically for the reason noted by Kristi and Jim in the mod_wsgi
> documentation), should the community goal be updated to include that
> option? Just thinking that we can't be the only service in this position.

I think we've left the implementation details up to the project
teams, for just that reason. That said, it would be good to document
how you do it (either formally or with a mailing list thread).

And FWIW, if what you choose to do is monitor a file, that's fine
as a trigger. I suggest not using the configuration file itself,
though, for the reasons mentioned earlier.

Doug

PS - I wonder how Apache deals with reloading its own configuration
file. Is there some sort of hook you could use?

> 
> [0] https://etherpad.openstack.org/p/keystone-weekly-meeting
> 
> >
> > Doug
> >
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] Technical Committee Update, 14 May

2018-05-15 Thread Doug Hellmann
Excerpts from Thierry Carrez's message of 2018-05-15 10:38:36 +0200:
> Doug Hellmann wrote:
> > We will also hold a retrospective for the TC as a team on Monday
> > at the Forum.  Please be prepared to discuss things you think are
> > going well, things you think we need to change, items from our
> > backlog that you would like to work on, etc. [10]
> > 
> > [10] 
> > https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21740/tc-retrospective
> 
> You mean Thursday, right ?
> 

Oops, yes, Thursday.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ci][infra][tripleo] Multi-staged check pipelines for Zuul v3 proposal

2018-05-15 Thread James E. Blair
Bogdan Dobrelya  writes:

> Added a few more patches [0], [1] based on the discussion results. PTAL folks.
> As for the rest of the topic, I'd propose to give it a try and revert
> it if it proves to do more harm than good.
> Thank you for the feedback!
>
> The next step could be reusing artifacts, like DLRN repos and
> containers built for patches and hosted undercloud, in the subsequent
> pipelined jobs. But I'm not sure how to even approach that.
>
> [0] https://review.openstack.org/#/c/568536/
> [1] https://review.openstack.org/#/c/568543/

In order to use an artifact in a dependent job, you need to store it
somewhere and retrieve it.

In the parent job, I'd recommend storing the artifact on the log server
(in an "artifacts/" directory) next to the job's logs.  The log server
is essentially a time-limited artifact repository keyed on the zuul
build UUID.

Pass the URL to the child job using the zuul_return Ansible module.

Have the child job fetch it from the log server using the URL it gets.

However, don't do that if the artifacts are very large -- more than a
few MB -- we'll end up running out of space quickly.

In that case, please volunteer some time to help the infra team set up a
swift container to store these artifacts.  We don't need to *run*
swift -- we have clouds with swift already.  We just need some help
setting up accounts, secrets, and Ansible roles to use it from Zuul.
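
As a rough sketch of that hand-off (playbook layout, variable name and URL are 
illustrative only, not an existing job definition), the parent and child jobs 
could look roughly like this:

  # Parent job, post-run playbook on the executor: publish the artifact
  # location for dependent jobs. The URL would really be built from
  # wherever the job's logs/artifacts get uploaded.
  - hosts: localhost
    tasks:
      - name: Return the artifact URL to child jobs
        zuul_return:
          data:
            artifact_url: "https://logs.example.org/<zuul-build-uuid>/artifacts/all.tar.gz"

  # Child job, pre-run playbook: data returned by the parent job is made
  # available to dependent jobs as Ansible variables.
  - hosts: all
    tasks:
      - name: Fetch the artifact produced by the parent job
        get_url:
          url: "{{ artifact_url }}"
          dest: /tmp/artifact.tar.gz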

-Jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ci][infra][tripleo] Multi-staged check pipelines for Zuul v3 proposal

2018-05-15 Thread Jeremy Stanley
On 2018-05-15 15:22:14 +0200 (+0200), Bogdan Dobrelya wrote:
[...]
> I mean pipelines as jobs executed in batches, ordered via defined
> dependencies, like gitlab pipelines [0]. And those batches can
> also be thought of steps, or whatever we call that.
[...]

Got it. So Zuul refers to that relationship as a job dependency:

https://zuul-ci.org/docs/zuul/user/config.html#attr-job.dependencies

To be clearer, you might refer to this as dependent job ordering or
a job dependency graph.
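
For illustration, such a dependency might be expressed in a project's Zuul
configuration roughly as follows (job names borrowed from the tripleo
proposal in this thread; the exact file and pipeline layout may differ):

  # Sketch: run the 3nodes multinode job only after the containers
  # multinode job has succeeded in the check pipeline.
  - project:
      check:
        jobs:
          - tripleo-ci-centos-7-containers-multinode
          - tripleo-ci-centos-7-3nodes-multinode:
              dependencies:
                - tripleo-ci-centos-7-containers-multinode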
-- 
Jeremy Stanley


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] psycopg2 wheel packaging issues

2018-05-15 Thread Stephen Finucane
On Tue, 2018-05-15 at 07:24 -0400, Doug Hellmann wrote:
> Excerpts from Stephen Finucane's message of 2018-05-15 11:44:11 +0100:
> > I imagine most people have been seeing warnings like the one below
> > raised by various openstack packages recently:
> > 
> >   .tox/py27/lib/python2.7/site-packages/psycopg2/__init__.py:144: 
> > UserWarning: The psycopg2
> >   wheel package will be renamed from release 2.8; in order to keep 
> > installing from binary
> >   please use "pip install psycopg2-binary" instead. For details see:
> >   .
> > 
> > Based on this warning, I had done what seemed to be the obvious thing
> > to do and proposed adding psycopg2-binary to the list of global
> > requirements [1]. This would allow us to replace all references to
> > psycopg2 with psycopg2-wheel in individual projects. However, upon
> > further investigation it seems this is not really an option since the
> > two packages exist in the same namespace and will clobber each other.
> > I've now abandoned this patch.
> > 
> > Does anyone with stronger Python packaging-fu than I have a better
> > solution for the psycopg2 folks? There's a detailed description of why
> > this was necessary on GitHub [2] along with some potential resolutions,
> > none of which seem to be acceptable. If nothing better is possible, it
> > seems we'll simply have to live with (or silence) these warnings in
> > psycopg2 2.7.x and start installing libpg again once 2.8 is released.
> > 
> > Cheers,
> > Stephen
> > 
> > [1] https://review.openstack.org/#/c/561924/
> > [2] https://github.com/psycopg/psycopg2/issues/674
> > 
> 
> Bundling an SSL library seems like a particularly bad situation, but if
> its ABI isn't stable it may be all they can do.
> 
> Perhaps some of the folks in the community who actually use Postgresql
> can get involved with helping the upstream maintainers of psycopg and
> libpg sort things out.

Yes, this would be my hope.

> In the mean time, is there any reason we can't just continue to
> install psycopg2 from source in our gate jobs after 2.8? If the
> wheel packages for psycopg2 2.7.x are bad perhaps we can come up
> with a way to pass --no-binary when installing it, but it's not
> clear if we need to. Does the bug affect us?

The only reason we might have issues is the libpq dependency. This was
required in 2.6 and will be required once again in 2.8. If this hasn't
been dropped from the list of requirements then we won't see any
breakages. If we do, we know where the issue lies.

Stephen

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ci][infra][tripleo] Multi-staged check pipelines for Zuul v3 proposal

2018-05-15 Thread Bogdan Dobrelya

On 5/15/18 2:30 PM, Jeremy Stanley wrote:

On 2018-05-15 14:07:56 +0200 (+0200), Bogdan Dobrelya wrote:
[...]

How pipelines may help solve it?
Pipelines only alleviate, not solve the problem of waiting. We only want to
build pipelines for the main zuul check process, omitting gating and RDO CI
(for now).

There are two cases to consider:
- A patch succeeds all checks
- A patch fails a check with dependencies

The latter case benefits us the most, when pipelines are designed as
proposed here, so that any jobs expected to fail, when a dependency fails,
will be omitted from execution.

[...]

Your choice of terminology is making it hard to follow this
proposal. You seem to mean something other than
https://zuul-ci.org/docs/zuul/user/config.html#pipeline when you use
the term "pipeline" (which gets confusing very quickly for anyone
familiar with Zuul configuration concepts).


Indeed, sorry for that confusion. I mean pipelines as jobs executed in 
batches, ordered via defined dependencies, like gitlab pipelines [0]. 
And those batches can also be thought of as steps, or whatever we call them.


[0] https://docs.gitlab.com/ee/ci/pipelines.html








--
Best regards,
Bogdan Dobrelya,
Irc #bogdando

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ci][infra][tripleo] Multi-staged check pipelines for Zuul v3 proposal

2018-05-15 Thread Jeremy Stanley
On 2018-05-15 14:07:56 +0200 (+0200), Bogdan Dobrelya wrote:
[...]
> How pipelines may help solve it?
> Pipelines only alleviate, not solve the problem of waiting. We only want to
> build pipelines for the main zuul check process, omitting gating and RDO CI
> (for now).
> 
> There are two cases to consider:
> - A patch succeeds all checks
> - A patch fails a check with dependencies
> 
> The latter case benefits us the most, when pipelines are designed as
> proposed here, so that any jobs expected to fail, when a dependency fails,
> will be omitted from execution.
[...]

Your choice of terminology is making it hard to follow this
proposal. You seem to mean something other than
https://zuul-ci.org/docs/zuul/user/config.html#pipeline when you use
the term "pipeline" (which gets confusing very quickly for anyone
familiar with Zuul configuration concepts).
-- 
Jeremy Stanley


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] Bug deputy update

2018-05-15 Thread Gary Kotton
Hi,
A few minor bugs opened and one critical one - 
https://bugs.launchpad.net/neutron/+bug/1771293
Thanks
Gary
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ci][infra][tripleo] Multi-staged check pipelines for Zuul v3 proposal

2018-05-15 Thread Bogdan Dobrelya

Let me clarify the problem I want to solve with pipelines.

It is getting *hard* to develop things and move patches to the Happy End 
(merged):
- Patches wait too long for CI jobs to start. It should be minutes and 
not hours of waiting.
- If a patch fails a job w/o a good reason, the subsequent recheck 
operation repeats the waiting all over again.


How may pipelines help solve it?
Pipelines only alleviate, not solve, the problem of waiting. We only want 
to build pipelines for the main zuul check process, omitting gating and 
RDO CI (for now).


There are two cases to consider:
- A patch succeeds all checks
- A patch fails a check with dependencies

The latter case benefits us the most, when pipelines are designed as 
proposed here, so that any jobs expected to fail, when a 
dependency fails, will be omitted from execution. This saves a lot of HW 
resources and zuul queue slots, making them available for other 
patches and allowing those to have CI jobs started faster (less 
waiting!). When we have "recheck storms", like because of some known 
intermittent side issue, that outcome is multiplied by the recheck storm 
um... level, and delivers even better and absolutely amazing results :) 
The Zuul queue will not grow insanely, getting overwhelmed by multiple 
clones of rechecked jobs highly likely doomed to fail, and blocking 
other patches that might have a chance to pass checks as they are not 
affected by that intermittent issue.


And for the first case, when a patch succeeds, it takes some extended 
time, and that is the price to pay. How much time it takes to finish in 
a pipeline fully depends on implementation.


The effectiveness could only be measured with numbers extracted from 
elastic search data, like the average time to wait for a job to start, 
success vs fail execution time percentiles for a job, the average number of 
rechecks, recheck storm history, et al. I don't have that data and don't 
know how to get it. Any help with that would be very much appreciated and could 
really help to move the proposed patches forward or to decline them. And we 
could then compare "before" and "after" as well.


I hope that explains the problem scope and the methodology to address that.

On 5/14/18 6:15 PM, Bogdan Dobrelya wrote:

An update for your review please folks


Bogdan Dobrelya  writes:


Hello.
As Zuul documentation [0] explains, the names "check", "gate", and
"post"  may be altered for more advanced pipelines. Is it doable to
introduce, for particular openstack projects, multiple check
stages/steps as check-1, check-2 and so on? And is it possible to make
the consequent steps reusing environments from the previous steps
finished with?

Narrowing down to tripleo CI scope, the problem I'd want we to solve
with this "virtual RFE", and using such multi-staged check pipelines,
is reducing (ideally, de-duplicating) some of the common steps for
existing CI jobs.


What you're describing sounds more like a job graph within a pipeline.
See: 
https://docs.openstack.org/infra/zuul/user/config.html#attr-job.dependencies 


for how to configure a job to run only after another job has completed.
There is also a facility to pass data between such jobs.

... (skipped) ...

Creating a job graph to have one job use the results of the previous job
can make sense in a lot of cases.  It doesn't always save *time*
however.

It's worth noting that in OpenStack's Zuul, we have made an explicit
choice not to have long-running integration jobs depend on shorter pep8
or tox jobs, and that's because we value developer time more than CPU
time.  We would rather run all of the tests and return all of the
results so a developer can fix all of the errors as quickly as possible,
rather than forcing an iterative workflow where they have to fix all the
whitespace issues before the CI system will tell them which actual tests
broke.

-Jim


I proposed a few zuul dependencies [0], [1] to tripleo CI pipelines for 
undercloud deployments vs upgrades testing (and some more). Given that 
those undercloud jobs do not have such high failure rates though, I think 
Emilien is right in his comments and those would buy us nothing.


 From the other side, what do you think, folks, of making
tripleo-ci-centos-7-3nodes-multinode depend on 
tripleo-ci-centos-7-containers-multinode [2]? The former seems quite 
failure-prone and long running, and is non-voting. It deploys (see featureset 
configs [3]*) 3 nodes in HA fashion. And it seems to almost never 
pass when the containers-multinode fails - see the CI stats page 
[4]. I've found only 2 cases there of the opposite situation, when 
containers-multinode fails but 3nodes-multinode passes. So cutting off 
those future failures via the added dependency *would* buy us something 
and allow other jobs to wait less before commencing, at the reasonable price of 
a somewhat extended time for the main zuul pipeline. I think it makes sense, 
and that extended CI time will not exceed the RDO CI execution times 
so much as to become a problem. WDYT?



Re: [openstack-dev] psycopg2 wheel packaging issues

2018-05-15 Thread Doug Hellmann
Excerpts from Stephen Finucane's message of 2018-05-15 11:44:11 +0100:
> I imagine most people have been seeing warnings like the one below
> raised by various openstack packages recently:
> 
>   .tox/py27/lib/python2.7/site-packages/psycopg2/__init__.py:144: 
> UserWarning: The psycopg2
>   wheel package will be renamed from release 2.8; in order to keep installing 
> from binary
>   please use "pip install psycopg2-binary" instead. For details see:
>   .
> 
> Based on this warning, I had done what seemed to be the obvious thing
> to do and proposed adding psycopg2-binary to the list of global
> requirements [1]. This would allow us to replace all references to
> psycopg2 with psycopg2-wheel in individual projects. However, upon
> further investigation it seems this is not really an option since the
> two packages exist in the same namespace and will clobber each other.
> I've now abandoned this patch.
> 
> Does anyone with stronger Python packaging-fu than I have a better
> solution for the psycopg2 folks? There's a detailed description of why
> this was necessary on GitHub [2] along with some potential resolutions,
> none of which seem to be acceptable. If nothing better is possible, it
> seems we'll simply have to live with (or silence) these warnings in
> psycopg2 2.7.x and start installing libpg again once 2.8 is released.
> 
> Cheers,
> Stephen
> 
> [1] https://review.openstack.org/#/c/561924/
> [2] https://github.com/psycopg/psycopg2/issues/674
> 

Bundling an SSL library seems like a particularly bad situation, but if
its ABI isn't stable it may be all they can do.

Perhaps some of the folks in the community who actually use Postgresql
can get involved with helping the upstream maintainers of psycopg and
libpq sort things out.

In the mean time, is there any reason we can't just continue to
install psycopg2 from source in our gate jobs after 2.8? If the
wheel packages for psycopg2 2.7.x are bad perhaps we can come up
with a way to pass --no-binary when installing it, but it's not
clear if we need to. Does the bug affect us?

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [vitrage] etherpads for Vitrage forum sessions

2018-05-15 Thread Afek, Ifat (Nokia - IL/Kfar Sava)
Hi,

I created etherpads for Vitrage forum sessions:


- Advanced RCA use cases - taking Vitrage to the next level: 
https://etherpad.openstack.org/p/YVR-vitrage-advanced-use-cases 
- Vitrage RCA over K8s. Pets and Cattle - Monitor each cow? : 
https://etherpad.openstack.org/p/YVR-vitrage-rca-over-k8s 

You are welcome to comment and propose more topics for discussion.

Thanks, 
Ifat





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] psycopg2 wheel packaging issues

2018-05-15 Thread Stephen Finucane
I imagine most people have been seeing warnings like the one below
raised by various openstack packages recently:

  .tox/py27/lib/python2.7/site-packages/psycopg2/__init__.py:144: UserWarning: 
The psycopg2
  wheel package will be renamed from release 2.8; in order to keep installing 
from binary
  please use "pip install psycopg2-binary" instead. For details see:
  .

Based on this warning, I had done what seemed to be the obvious thing
to do and proposed adding psycopg2-binary to the list of global
requirements [1]. This would allow us to replace all references to
psycopg2 with psycopg2-binary in individual projects. However, upon
further investigation it seems this is not really an option since the
two packages exist in the same namespace and will clobber each other.
I've now abandoned this patch.

Does anyone with stronger Python packaging-fu than I have a better
solution for the psycopg2 folks? There's a detailed description of why
this was necessary on GitHub [2] along with some potential resolutions,
none of which seem to be acceptable. If nothing better is possible, it
seems we'll simply have to live with (or silence) these warnings in
psycopg2 2.7.x and start installing libpq again once 2.8 is released.

Cheers,
Stephen

[1] https://review.openstack.org/#/c/561924/
[2] https://github.com/psycopg/psycopg2/issues/674

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ci][infra][tripleo] Multi-staged check pipelines for Zuul v3 proposal

2018-05-15 Thread Bogdan Dobrelya

Added a few more patches [0], [1] by the discussion results. PTAL folks.
Wrt remaining in the topic, I'd propose to give it a try and revert it, 
if it proved to be worse than better.

Thank you for feedback!

The next step could be reusing artifacts, like DLRN repos and containers 
built for patches and hosted undercloud, in the consequent pipelined 
jobs. But I'm not sure how to even approach that.


[0] https://review.openstack.org/#/c/568536/
[1] https://review.openstack.org/#/c/568543/

On 5/15/18 10:54 AM, Bogdan Dobrelya wrote:

On 5/14/18 10:06 PM, Alex Schultz wrote:
On Mon, May 14, 2018 at 10:15 AM, Bogdan Dobrelya 
 wrote:

An update for your review please folks


Bogdan Dobrelya  writes:


Hello.
As Zuul documentation [0] explains, the names "check", "gate", and
"post"  may be altered for more advanced pipelines. Is it doable to
introduce, for particular openstack projects, multiple check
stages/steps as check-1, check-2 and so on? And is it possible to make
the consequent steps reusing environments from the previous steps
finished with?

Narrowing down to tripleo CI scope, the problem I'd want we to solve
with this "virtual RFE", and using such multi-staged check pipelines,
is reducing (ideally, de-duplicating) some of the common steps for
existing CI jobs.



What you're describing sounds more like a job graph within a pipeline.
See:
https://docs.openstack.org/infra/zuul/user/config.html#attr-job.dependencies 


for how to configure a job to run only after another job has completed.
There is also a facility to pass data between such jobs.

... (skipped) ...

Creating a job graph to have one job use the results of the previous 
job

can make sense in a lot of cases.  It doesn't always save *time*
however.

It's worth noting that in OpenStack's Zuul, we have made an explicit
choice not to have long-running integration jobs depend on shorter pep8
or tox jobs, and that's because we value developer time more than CPU
time.  We would rather run all of the tests and return all of the
results so a developer can fix all of the errors as quickly as 
possible,
rather than forcing an iterative workflow where they have to fix all 
the
whitespace issues before the CI system will tell them which actual 
tests

broke.

-Jim



I proposed a few zuul dependencies [0], [1] to tripleo CI pipelines for
undercloud deployments vs upgrades testing (and some more). Given 
that those
undercloud jobs have not so high fail rates though, I think Emilien 
is right

in his comments and those would buy us nothing.

 From the other side, what do you think folks of making the
tripleo-ci-centos-7-3nodes-multinode depend on
tripleo-ci-centos-7-containers-multinode [2]? The former seems quite 
faily

and long running, and is non-voting. It deploys (see featuresets configs
[3]*) a 3 nodes in HA fashion. And it seems almost never passing, 
when the
containers-multinode fails - see the CI stats page [4]. I've found 
only a 2
cases there for the otherwise situation, when containers-multinode 
fails,
but 3nodes-multinode passes. So cutting off those future failures via 
the
dependency added, *would* buy us something and allow other jobs to 
wait less

to commence, by a reasonable price of somewhat extended time of the main
zuul pipeline. I think it makes sense and that extended CI time will not
overhead the RDO CI execution times so much to become a problem. WDYT?



I'm not sure it makes sense to add a dependency on other deployment
tests. It's going to add additional time to the CI run because the
upgrade won't start until well over an hour after the rest of the


The things are not so simple. There is also a significant 
time-to-wait-in-queue jobs start delay. And it takes probably even 
longer than the time to execute jobs. And that delay is a function of 
available HW resources and zuul queue length. And the proposed change 
affects those parameters as well, assuming jobs with failed dependencies 
won't run at all. So we could expect longer execution times compensated 
with shorter wait times! I'm not sure how to estimate that tho. You 
folks have all numbers and knowledge, let's use that please.



jobs.  The only thing I could think of where this makes more sense is
to delay the deployment tests until the pep8/unit tests pass.  e.g.
let's not burn resources when the code is bad. There might be
arguments about lack of information from a deployment when developing
things but I would argue that the patch should be vetted properly
first in a local environment before taking CI resources.


I support this idea as well, though I'm sceptical about having that 
blessed in the end :) I'll add a patch though.




Thanks,
-Alex


[0] https://review.openstack.org/#/c/568275/
[1] https://review.openstack.org/#/c/568278/
[2] https://review.openstack.org/#/c/568326/
[3]
https://docs.openstack.org/tripleo-quickstart/latest/feature-configuration.html 


[4] http://tripleo.org/cistatus.html

* ignore the column 1, it's 

Re: [openstack-dev] [horizon] Scheduling switch to django >= 2.0

2018-05-15 Thread Thomas Goirand
On 05/14/2018 03:30 PM, Akihiro Motoki wrote:
> Is Python 3 ever used for mod_wsgi? Does the WSGI setup code honor
> the variable that tells devstack to use Python 3?
> 
> 
> Ubuntu 16.04 provides py2 and py3 versions of mod_wsgi (libapache2-mod-wsgi
> and libapache2-mod-wsgi-py3) and from a quick look the only difference is
> the module specified in the LoadModule apache directive.
> I haven't tested it yet, but it seems worth exploring.
> 
> Akihiro

libapache2-mod-wsgi-py3 is what's in use in all Debian packages for
OpenStack, and it works well, including for Horizon.

Cheers,

Thomas Goirand (zigo)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ci][infra][tripleo] Multi-staged check pipelines for Zuul v3 proposal

2018-05-15 Thread Bogdan Dobrelya

On 5/14/18 10:06 PM, Alex Schultz wrote:

On Mon, May 14, 2018 at 10:15 AM, Bogdan Dobrelya  wrote:

An update for your review please folks


Bogdan Dobrelya  writes:


Hello.
As Zuul documentation [0] explains, the names "check", "gate", and
"post"  may be altered for more advanced pipelines. Is it doable to
introduce, for particular openstack projects, multiple check
stages/steps as check-1, check-2 and so on? And is it possible to make
the consequent steps reusing environments from the previous steps
finished with?

Narrowing down to tripleo CI scope, the problem I'd want we to solve
with this "virtual RFE", and using such multi-staged check pipelines,
is reducing (ideally, de-duplicating) some of the common steps for
existing CI jobs.



What you're describing sounds more like a job graph within a pipeline.
See:
https://docs.openstack.org/infra/zuul/user/config.html#attr-job.dependencies
for how to configure a job to run only after another job has completed.
There is also a facility to pass data between such jobs.

... (skipped) ...

Creating a job graph to have one job use the results of the previous job
can make sense in a lot of cases.  It doesn't always save *time*
however.

It's worth noting that in OpenStack's Zuul, we have made an explicit
choice not to have long-running integration jobs depend on shorter pep8
or tox jobs, and that's because we value developer time more than CPU
time.  We would rather run all of the tests and return all of the
results so a developer can fix all of the errors as quickly as possible,
rather than forcing an iterative workflow where they have to fix all the
whitespace issues before the CI system will tell them which actual tests
broke.

-Jim



I proposed a few zuul dependencies [0], [1] to tripleo CI pipelines for
undercloud deployments vs upgrades testing (and some more). Given that those
undercloud jobs have not so high fail rates though, I think Emilien is right
in his comments and those would buy us nothing.

 From the other side, what do you think folks of making the
tripleo-ci-centos-7-3nodes-multinode depend on
tripleo-ci-centos-7-containers-multinode [2]? The former seems quite faily
and long running, and is non-voting. It deploys (see featuresets configs
[3]*) a 3 nodes in HA fashion. And it seems almost never passing, when the
containers-multinode fails - see the CI stats page [4]. I've found only a 2
cases there for the otherwise situation, when containers-multinode fails,
but 3nodes-multinode passes. So cutting off those future failures via the
dependency added, *would* buy us something and allow other jobs to wait less
to commence, by a reasonable price of somewhat extended time of the main
zuul pipeline. I think it makes sense and that extended CI time will not
overhead the RDO CI execution times so much to become a problem. WDYT?



I'm not sure it makes sense to add a dependency on other deployment
tests. It's going to add additional time to the CI run because the
upgrade won't start until well over an hour after the rest of the


Things are not so simple. There is also a significant 
time-to-wait-in-queue job start delay, and it probably takes even 
longer than the time to execute the jobs. That delay is a function of 
available HW resources and zuul queue length. And the proposed change 
affects those parameters as well, assuming jobs with failed dependencies 
won't run at all. So we could expect longer execution times compensated 
by shorter wait times! I'm not sure how to estimate that though. You 
folks have all the numbers and knowledge, let's use that please.



jobs.  The only thing I could think of where this makes more sense is
to delay the deployment tests until the pep8/unit tests pass.  e.g.
let's not burn resources when the code is bad. There might be
arguments about lack of information from a deployment when developing
things but I would argue that the patch should be vetted properly
first in a local environment before taking CI resources.


I support this idea as well, though I'm sceptical about having that 
blessed in the end :) I'll add a patch though.




Thanks,
-Alex


[0] https://review.openstack.org/#/c/568275/
[1] https://review.openstack.org/#/c/568278/
[2] https://review.openstack.org/#/c/568326/
[3]
https://docs.openstack.org/tripleo-quickstart/latest/feature-configuration.html
[4] http://tripleo.org/cistatus.html

* ignore the column 1, it's obsolete, all CI jobs now using configs download
AFAICT...

--
Best regards,
Bogdan Dobrelya,
Irc #bogdando

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [ci][infra][tripleo] Multi-staged check pipelines for Zuul v3 proposal

2018-05-15 Thread Bogdan Dobrelya

On 5/14/18 9:15 PM, Sagi Shnaidman wrote:

Hi, Bogdan

I like the idea with the undercloud job. Actually, if the undercloud fails, I'd 
stop all other jobs, because it doesn't make sense to run them. Seeing 
the same failure in 10 jobs doesn't add too much. So maybe adding the 
undercloud job as a dependency for all multinode jobs would be a great idea.


I like that idea, I'll add another patch in the topic then.
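
A rough sketch of what that could look like in the Zuul layout, with the
undercloud job gating the multinode jobs (job names are illustrative of the
tripleo jobs discussed here, not the exact configuration being proposed):

  # Sketch: fail fast on a broken undercloud and skip the multinode jobs.
  - project:
      check:
        jobs:
          - tripleo-ci-centos-7-undercloud-containers
          - tripleo-ci-centos-7-containers-multinode:
              dependencies:
                - tripleo-ci-centos-7-undercloud-containers
          # ...and similarly for the other multinode/scenario jobs.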

I think it's also worth checking how long it will delay jobs. Will all 
jobs wait until the undercloud job has run? Or will they be aborted when 
the undercloud job fails?


That is a good question for the openstack-infra folks developing zuul :)
But we could just try it and see how it works; happily, zuul v3 allows 
doing that just in the scope of the proposed patches! My expectation is that all 
jobs would be delayed (and I mean the main zuul pipeline execution time here) by 
the average time of the undercloud deploy job, ~80 min, which hopefully 
should not be a big deal given that there is a separate RDO CI pipeline 
running in parallel, which normally *highly likely* extends that 
overall time anyway :) And that is given the high chance of additional 'recheck 
rdo' runs we can observe these days for patches on review. I wish we 
could introduce inter-pipeline dependencies (zuul CI <-> RDO CI) for 
those as well...




However, I'm very sceptical about the multinode containers and scenario 
jobs; they can fail for very different reasons, like race 
conditions in the product or infra issues. Skipping some of them will 
lead to more rechecks from devs trying to discover all problems in a 
row, which will delay the development process significantly.


Right, I roughly estimated that the delay for the main zuul pipeline execution 
time might be ~2.5h, which is not good. We could live with 
that were it only ~1h, like it is for the undercloud containers 
job dependency example.




Thanks


On Mon, May 14, 2018 at 7:15 PM, Bogdan Dobrelya > wrote:


An update for your review please folks

Bogdan Dobrelya http://redhat.com>> writes:

Hello.
As Zuul documentation [0] explains, the names "check",
"gate", and
"post"  may be altered for more advanced pipelines. Is it
doable to
introduce, for particular openstack projects, multiple check
stages/steps as check-1, check-2 and so on? And is it
possible to make
the consequent steps reusing environments from the previous
steps
finished with?

Narrowing down to tripleo CI scope, the problem I'd want we
to solve
with this "virtual RFE", and using such multi-staged check
pipelines,
is reducing (ideally, de-duplicating) some of the common
steps for
existing CI jobs.


What you're describing sounds more like a job graph within a
pipeline.
See:

https://docs.openstack.org/infra/zuul/user/config.html#attr-job.dependencies


for how to configure a job to run only after another job has
completed.
There is also a facility to pass data between such jobs.

... (skipped) ...

Creating a job graph to have one job use the results of the
previous job
can make sense in a lot of cases.  It doesn't always save *time*
however.

It's worth noting that in OpenStack's Zuul, we have made an explicit
choice not to have long-running integration jobs depend on
shorter pep8
or tox jobs, and that's because we value developer time more
than CPU
time.  We would rather run all of the tests and return all of the
results so a developer can fix all of the errors as quickly as
possible,
rather than forcing an iterative workflow where they have to fix
all the
whitespace issues before the CI system will tell them which
actual tests
broke.

-Jim


I proposed a few zuul dependencies [0], [1] to tripleo CI pipelines
for undercloud deployments vs upgrades testing (and some more).
Given that those undercloud jobs have not so high fail rates though,
I think Emilien is right in his comments and those would buy us nothing.

 From the other side, what do you think folks of making the
tripleo-ci-centos-7-3nodes-multinode depend on
tripleo-ci-centos-7-containers-multinode [2]? The former seems quite
faily and long running, and is non-voting. It deploys (see
featuresets configs [3]*) a 3 nodes in HA fashion. And it seems
almost never passing, when the containers-multinode fails - see the
CI stats page [4]. I've found only a 2 cases there for the otherwise
situation, when containers-multinode fails, but 3nodes-multinode

Re: [openstack-dev] [tc] Technical Committee Update, 14 May

2018-05-15 Thread Thierry Carrez

Doug Hellmann wrote:

We will also hold a retrospective for the TC as a team on Monday
at the Forum.  Please be prepared to discuss things you think are
going well, things you think we need to change, items from our
backlog that you would like to work on, etc. [10]

[10] 
https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21740/tc-retrospective


You mean Thursday, right ?

--
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][goals] tracking status of old goals for new projects

2018-05-15 Thread Thierry Carrez

Doug Hellmann wrote:

There is a patch to update the Python 3.5 goal for Kolla [1]. While
I'm glad to see the work happening, the change adds a new deliverable
to an old goal, and it isn’t clear whether we want to use that
approach for tracking goal work indefinitely. I see a few options.

1. We could update the existing document.

2. We could set up stories in storyboard like we are doing for newer
goals.

3. We could do nothing to record the work related to the goal.

I like option 2, because it means we will be consistent with future
tracking data and we end up with fewer changes in the governance repo
(which was the reason for moving to storyboard in the first place).

What do others think?


I don't have a strong opinion, small preference for (2). At the end of 
the cycle, the goal becomes just another story with leftover tasks.


--
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] tripleo upstream gate outtage, was: -> gate jobs impacted RAX yum mirror

2018-05-15 Thread Sergii Golovatiuk
Wesley,

For Ubuntu I suggest enabling the 'proposed' repo to catch problems
before packages are moved to 'updates'.

On Mon, May 14, 2018 at 11:42 PM, Wesley Hayutin  wrote:
>
>
> On Sun, May 13, 2018 at 11:50 PM Tristan Cacqueray 
> wrote:
>>
>> On May 14, 2018 2:44 am, Wesley Hayutin wrote:
>> [snip]
>> > I do think it would be helpful to say have a one week change window
>> > where
>> > folks are given the opportunity to preflight check a new image and the
>> > potential impact on the job workflow the updated image may have.
>> [snip]
>>
>> How about adding a periodic job that sets up centos-release-cr in a pre
>> task? This should highlight issues with upcoming updates:
>> https://wiki.centos.org/AdditionalResources/Repositories/CR
>>
>> -Tristan
>
>
> Thanks for the suggestion Tristan, going to propose using this repo at the
> next TripleO mtg.
>
> Thanks
>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Best Regards,
Sergii Golovatiuk

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] review runway status

2018-05-15 Thread Chen CH Ji
Thanks for sharing. The z/VM driver spec review is marked as END DATE:
2018-05-15.
Thanks, a couple of folks have helped a lot on the review, but we still need more
review activity on the patch sets. Can I apply to extend the end date for the
runway? Thanks a lot

Best Regards!

Kevin (Chen) Ji 纪 晨

Engineer, zVM Development, CSTL
Notes: Chen CH Ji/China/IBM@IBMCN   Internet: jiche...@cn.ibm.com
Phone: +86-10-82451493
Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District,
Beijing 100193, PRC



From:   melanie witt 
To: "OpenStack Development Mailing List (not for usage questions)"

Date:   05/15/2018 12:33 AM
Subject:[openstack-dev] [nova] review runway status



Howdy everyone,

This is just a brief status about the blueprints currently occupying
review runways [0] and an ask for the nova-core team to give these
reviews priority for their code review focus.

* Add z/VM driver
https://blueprints.launchpad.net/nova/+spec/add-zvm-driver-rocky

(jichen) [END DATE: 2018-05-15] spec amendment
https://review.openstack.org/562154 and implementation series starting at
https://review.openstack.org/523387


* Local disk serial numbers
https://blueprints.launchpad.net/nova/+spec/local-disk-serial-numbers

(mdbooth) [END DATE: 2018-05-16] series starting at
https://review.openstack.org/526346


* PowerVM Driver (esberglu) [END DATE: 2018-05-28]
   * Snapshot
https://blueprints.launchpad.net/nova/+spec/powervm-snapshot:

https://review.openstack.org/#/c/543023/

   * DiskAdapter parent class
https://blueprints.launchpad.net/nova/+spec/powervm-localdisk:

https://review.openstack.org/#/c/549053/

   * Localdisk
https://blueprints.launchpad.net/nova/+spec/powervm-localdisk:

https://review.openstack.org/#/c/549300/


Cheers,
-melanie

[0] https://etherpad.openstack.org/p/nova-runways-rocky


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Forum] "DPDK/SR-IOV NFV Operational issues and way forward" session etherpad

2018-05-15 Thread Shintaro Mizuno

Hi

I have created an etherpad page for
"DPDK/SR-IOV NFV Operational issues and way forward"
session at the Vancouver Forum [1].

It will take place on Wed 23, 11:50am - 12:30pm
Vancouver Convention Centre West - Level Two - Room 221-222

If you are using/testing DPDK/SR-IOV for NFV workloads and interested in  
discussing their pros/cons and possible next steps for NFV operators and  
developers, please come join the session.

Please also add your comment/topic proposals to the etherpad beforehand.

[1] https://etherpad.openstack.org/p/YVR-dpdk-sriov-way-forward

Any input is highly appreciated.
Regards,
Shintaro
--
Shintaro MIZUNO (水野伸太郎)
NTT Software Innovation Center
TEL: 0422-59-4977
E-mail: mizuno.shint...@lab.ntt.co.jp


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev