Giving a few updates here:

- we implemented option 1.a), which means we moved the TripleO CI
scenario environments and pingtests into tripleo-heat-templates (a
sketch of such an environment file follows below).
- we created tripleo-scenarioXXX-puppet jobs that run against some
Puppet modules.
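
As a rough illustration, here is a minimal sketch of what one of these
scenario environment files in tripleo-heat-templates could look like.
The path, service mappings and parameter below are assumptions made up
for this example, not the exact contents of the repository:

  # environments/ci/scenarios/scenario001-multinode.yaml (illustrative path)
  # Hypothetical excerpt: enable only the Telemetry services that
  # scenario001 is meant to exercise, so the CI node is not overloaded.
  resource_registry:
    OS::TripleO::Services::CeilometerApi: ../../../puppet/services/ceilometer-api.yaml
    OS::TripleO::Services::AodhApi: ../../../puppet/services/aodh-api.yaml
    OS::TripleO::Services::GnocchiApi: ../../../puppet/services/gnocchi-api.yaml

  parameter_defaults:
    # Assumed parameter name: keep the Gnocchi backend simple for CI.
    GnocchiBackend: file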

Some examples:
- puppet-gnocchi now runs tripleo-scenario001, which deploys TripleO
with the Telemetry services.
- if you submit a patch in puppet-tripleo that touches the gnocchi
profiles, tripleo-scenario001 will also run (see the sketch after this
list).
- if you submit a patch in THT that touches the gnocchi composable
service, tripleo-scenario001 will also run.
- if you add a new service in TripleO during Pike and test it in a
scenario, the TripleO CI scenarios for Ocata will continue to work,
since we now use THT (which is branched) to store the CI environments
and we don't backport features.
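
To make this concrete, the cross-repository triggering is roughly done
with file filters on the scenario jobs defined in project-config. The
snippet below is only a hand-written sketch with made-up job names and
regexes to show the idea, not the actual layout:

  # Illustrative only: file filters restrict when a scenario job runs.
  jobs:
    - name: gate-puppet-tripleo-scenario001-multinode          # hypothetical name
      files:
        - ^manifests/profile/base/gnocchi/.*$   # gnocchi profiles in puppet-tripleo
    - name: gate-tripleo-heat-templates-scenario001-multinode  # hypothetical name
      files:
        - ^puppet/services/gnocchi.*$           # gnocchi composable services in THT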

Future:
- investigate whether we could run the scenarios outside TripleO CI
(for example: run tripleo-scenario001 in Gnocchi upstream CI, alongside
the other devstack jobs).
- keep increasing the coverage of use cases: more services, SSL, IPv6,
more plugins, etc.
- investigate how we could run multinode scenarios by using tripleo-quickstart.

Any feedback and help on this topic are welcome as usual.
Don't hesitate to contribute: add your own service or propose scenario
improvements!

Thanks,

On Mon, Nov 28, 2016 at 3:35 PM, John Trowbridge <tr...@redhat.com> wrote:
>
>
> On 11/22/2016 09:02 PM, Emilien Macchi wrote:
>> == Context
>>
>> In Newton we added new multinode jobs called "scenarios".
>> The challenge we tried to solve was "how to test as many services
>> as possible without overloading the nodes that run the tests".
>>
>> Each scenario deploys a set of services, which allows us to
>> horizontally scale the number of scenarios to increase the service
>> testing coverage.
>> See the result here:
>> https://github.com/openstack-infra/tripleo-ci#service-testing-matrix
>>
>> To implement this model, we followed the example of Puppet OpenStack CI:
>> https://github.com/openstack/puppet-openstack-integration#description
>> We even tried to keep the services/scenarios mapping consistent, so
>> it's easier to maintain.
>>
>> Everything was fine until we had to add new services during the Ocata cycle.
>> Because the tripleo-ci repository is not branched, adding the Barbican
>> service to the TripleO environment for scenario002 would break Newton CI jobs.
>> During my vacations, the team created a new scenario, scenario004,
>> that deploys Barbican and that is only run for Ocata jobs.
>> I don't think we should proceed this way, and let me explain why.
>>
>> == Problem
>>
>> How to scale the number of services that we test without increasing
>> the number of scenarios, and therefore the complexity of maintaining
>> them in the long term.
>>
>>
>> == Solutions
>>
>> The list is not exhaustive, feel free to add more.
>>
>> 1) Re-use the experience from Puppet OpenStack CI and keep the
>> environments in a branched repository.
>> In Puppet OpenStack CI, the repository that deploys environments
>> (puppet-openstack-integration) is branched. So if puppet-barbican is
>> ready to be tested in Ocata, we'll patch
>> puppet-openstack-integration/master to start testing it and it won't
>> break stable jobs.
>> This way, we were able to successfully maintain a fair number of
>> scenarios and keep increasing our coverage over each cycle.
>>
>> I see 2 sub-options here:
>>
>> a) Move the CI environments and pingtest into
>> tripleo-heat-templates/environments/ci/(scenarios|pingtest). This repo
>> is branched, and we could add a README to explain that these files are
>> used in CI and that we don't guarantee they will work outside the
>> TripleO CI tools.
>
> I also like this solution the best. It has the added benefit of being
> able to review the CI for a new service in the same patch (or patch
> chain) that adds the new service. We already have the low-memory
> environment in THT, which, while not CI-specific, is definitely a CI
> requirement.
>
>> b) Branch the tripleo-ci repository. Personally I don't like this
>> solution because a lot of patches in this repo are not related to
>> OpenStack versions, which means we would need to backport most things
>> from master.
>>
>> 2) Introduce branch-based scenario tests -
>> https://review.openstack.org/#/c/396008/
>> It duplicates a lot of code and it's imho not really effective, though
>> this solution would work correctly.
>>
>> 3) Introduce a new scenario each time we have new services (like we
>> did with scenario004).
>> Adding new scenarios at each release because we test new services
>> is imho the wrong choice because:
>> a) it adds complexity in how we're going to maintain these scenarios.
>> b) it consumes more CI resources, which we would need when some
>> patches have to run all the scenario jobs.
>>
>>
>> So I gave my opinion on the solutions; the discussion is now open and
>> my hope is that we find a consensus soon, so we can make progress on
>> our testing coverage.
>> Thanks,
>>
>



-- 
Emilien Macchi

