A quick follow-up on the recent discussion/investigation:
* Using ZUUL_CHANGES we can automatically discover and build
cross-project packages for each Depends-On change.
* The output of zuul-rpm-build should be a repository including all
the cross-project changes packaged, so that:
** The next JJB builder can install the repository with a higher
priority to consume the newly built packages (priority sketch after
this list), e.g.:
builder:
- zuul-rpm-build
- integration-test-01 (e.g. smoke test)
** The repo could also be published as an artifact so that further
Zuul jobs can also consume the newly built packages, e.g.:
zuul-rpm-build:
- integration-test-02 (e.g. tempest like test)
- integration-test-03 (e.g. upgrade test)
* zuul-rpm-build should support different project types:
** Basic -distgit repository.
** Gerrit-powered -distgit, where sources and patches are stored in
Gerrit.
** Regular project with an associated -distgit repository, where a new
change results in a new package.
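
For the priority part, here is a minimal sketch of what a builder
could do. The repo id, baseurl and file path are assumptions, and the
priority option relies on the yum-plugin-priorities plugin (lower
number wins):

REPO_TEMPLATE = """[zuul-built]
name=Packages built from the ZUUL_CHANGES
baseurl={baseurl}
enabled=1
gpgcheck=0
priority=1
"""

def install_ci_repo(baseurl, path="/etc/yum.repos.d/zuul-built.repo"):
    # Drop the repo file so the next builders consume the CI-built
    # packages first (hypothetical helper)
    with open(path, "w") as repo_file:
        repo_file.write(REPO_TEMPLATE.format(baseurl=baseurl))

# e.g. the repository produced on the slave by zuul-rpm-build:
# install_ci_repo("file:///var/lib/zuul-rpm-build/repo")
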
Regards,
-Tristan
On 12/13/2016 10:23 AM, Tristan Cacqueray wrote:
> Many thanks Fabien for driving this discussion.
>
> To summarize, it seems like we need a clever and generic
> "zuul-rpm-build" job, which could become a project of its own and
> which will be in charge of preparing the sources and the spec for a
> nice RPM-based CI workflow.
>
> Moreover, we will probably also need an sf-release tool to prepare all
> the specs and promote/freeze/tag/whatever the master repo.
>
>
> On 12/13/2016 10:09 AM, Fabien Boucher wrote:
>> Hi,
>>
>> I wanted to share some points raised during a discussion with Fred:
>>
>> * We should avoid auto-generated NVRs that break the RPM upgrade
>> logic.
>> * The master RPM repo of the distribution must be a fully functional
>> one and contain the most recent package of each piece of software,
>> with a correct NVR.
>> * Instead of building packages via Koji in the "check" pipeline, we
>> should investigate how to do it via mock at this step in order to
>> satisfy the RPM build requirements. (Koji can still be used for the
>> master RPM repo and the stable repos.)
>> ** This mock environment can then be based on the master RPM repo:
>> since the NVRs are correct, CI jobs only rebuild what is really needed.
>> * Each packaged source must have a publicly retrievable Source0
>> tarball. That means the Source0 referenced in the SRPM's spec must be
>> fetchable, as sketched below.
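>>
>> A minimal sketch of that check (naive spec parsing, no macro
>> expansion; a real job would expand macros first, e.g. with
>> "rpmspec -P"):
>>
>> import re
>> import urllib2
>>
>> def source0_url(spec_path):
>>     # Find the Source0 (or bare Source) URL in the spec
>>     for line in open(spec_path):
>>         match = re.match(r'^Source0?\s*:\s*(\S+)', line)
>>         if match:
>>             return match.group(1)
>>     raise ValueError("No Source0 found in %s" % spec_path)
>>
>> def check_source0(spec_path):
>>     # Raises if the tarball is not publicly retrievable
>>     url = source0_url(spec_path)
>>     urllib2.urlopen(url).close()
>>     return url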
>>
> It seems like mock will be much easier to use indeed.
> Then build requirements could be installed in the mock environment by
> the "zuul-rpm-build" job.
>
> Unit-tests can also be run at that stage, as demonstrated in this
> article:
> https://blog.russellbryant.net/2011/10/06/automated-testing-of-matahari-in-a-chroot-using-mock/
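>
> A minimal sketch of those mock steps (the chroot name and paths are
> assumptions):
>
> import subprocess
>
> MOCK_ROOT = "epel-7-x86_64"
>
> def mock(*args):
>     subprocess.check_call(["mock", "-r", MOCK_ROOT] + list(args))
>
> def build(spec, sources_dir, srpm):
>     # Build the SRPM from the (possibly rewritten) spec and sources,
>     # then rebuild it; the results land in the mock result dir
>     mock("--buildsrpm", "--spec", spec, "--sources", sources_dir)
>     mock("--rebuild", srpm)
>
> def run_unit_tests(command):
>     # Unit tests can run in the same chroot, as in the article above
>     mock("--chroot", command)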
>
> And for the pipeline workflow, we could probably:
> * create a repo at the end of the rpm-build job
> * publish it at a known place (e.g. swift/cirepo/change_nr/patchset_nr/)
> * use this repo with a higher priority for the follow-up integration tests.
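>
> A minimal sketch of the first two steps (the swift upload itself is
> left out; the layout follows the path suggested above):
>
> import os
> import subprocess
>
> def create_repo(result_dir):
>     # createrepo_c generates the repodata/ for every .rpm in result_dir
>     subprocess.check_call(["createrepo_c", result_dir])
>     return result_dir
>
> def publication_path(change_nr, patchset_nr):
>     # e.g. cirepo/change_nr/patchset_nr/ in the swift container
>     return os.path.join("cirepo", str(change_nr), str(patchset_nr))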
>
>> Also I think there are three types of packages we will need to handle:
>>
>> * External source:
>> The associated -distgit repos, and especially the .spec, do not need
>> to be modified on the fly by the CI jobs; we usually package an
>> upstream tag. -distgit repo changes can be managed by Zuul to test the
>> integration of a new source or new packaging within the distribution
>> before the change lands in Git and in the master RPM repo.
>>
>> * Internal source (to Gerrit), part of the distribution:
>> Here the -distgit spec file needs to be modified on the fly when a
>> change on the source needs to be tested via the CI. The CI job needs
>> to be able to compute a correct NVR for it, higher than the one that
>> already exists in the master RPM repo.
>>
>> * Fast-moving source that needs per-commit packaging to detect
>> packaging issues earlier:
>> IMO it seems better to rely on DLRN to handle this case.
>>
>> Cheers,
>> Fabien
>>
>>
>> On 10/12/2016 at 00:05, Fabien Boucher wrote:
>>> Hi,
>>>
>>> Thinking about it again, and trying to be generic enough to design a
>>> system that works for every RPM distribution of packages.
>>>
>>> The base expectation for an RPM distribution is that all its packages
>>> can be installed together in the OS.
>>>
>>> This can be tested by installing all of them and checking that there
>>> is no error. Every change proposed on a distgit repo needs to be
>>> built and tested (i.e. installed together with the other packages on
>>> the system), and if that passes we can approve and merge the change.
>>>
>>> With this mechanism we know that the HEADs of all the distgit repos
>>> form a coherent and working RPM distribution (they pass the
>>> install-all-together test).
>>>
>>> Then at a given time we want to release them as an RPM distribution
>>> identified by a name/version: the release. We know all the HEADs work
>>> together, so we "snapshot" all the corresponding RPMs into a specific
>>> RPM release repo.
>>>
>>> In that case the RPM distribution is coherent because all the
>>> packages install together (verified by a job), but we can have more
>>> complex jobs, like in SF where we run full scenario workflows.
>>>
>>> In a typical RPM distribution you have a bunch of distgit Git
>>> repositories and some source repositories (like the installer, the
>>> branding, the docs, ...), and we will want to test all that stuff
>>> together before merging any patch in the distgit or source repos. The
>>> way to do that with Zuul is to host all of them in Gerrit and use
>>> zuul-cloner to clone all of them (all the distgit and source repos
>>> that are part of the RPM distro) into the job workspace.
>>>
>>> Basically you can build the RPMs for every repo, install all of them
>>> and return success if everything installs without issue. But it is
>>> smarter to only build the ones that were never built before.
>>>
>>> Having a master RPM repo, the "common pot" ("pot commun"), makes
>>> sense here: previously built packages remain stored there with an NVR
>>> such as:
>>> <name>-<sf-ver>-<source-(short-sha|tag)>-<distgit-short-sha>-<packaging-ver>.rpm
>>> so the job can walk through all the cloned repos to find which ones
>>> have not been built previously. If some have not been built, then
>>> build them and have them land in the master repo.
>>>
>>> Usually only the repos mentioned in ZUUL_CHANGES will need a rebuild,
>>> as sketched below.
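>>>
>>> A minimal sketch of that walk, assuming each workspace entry pairs a
>>> source checkout with its -distgit, and that the set of NVRs already
>>> in the master repo can be queried (e.g. with repoquery):
>>>
>>> import subprocess
>>>
>>> SF_VER = "3.0.0"
>>>
>>> def short_sha(repo_path):
>>>     # Short HEAD SHA of a repo cloned in the workspace
>>>     return subprocess.check_output(
>>>         ["git", "-C", repo_path, "rev-parse", "--short", "HEAD"]).strip()
>>>
>>> def expected_nvr(name, source_repo, distgit_repo, packaging_ver="0"):
>>>     # <name>-<sf-ver>-<source-short-sha>-<distgit-short-sha>-<packaging-ver>
>>>     return "%s-%s-%s-%s-%s" % (name, SF_VER, short_sha(source_repo),
>>>                                short_sha(distgit_repo), packaging_ver)
>>>
>>> def needs_rebuild(workspace_repos, master_repo_nvrs):
>>>     # workspace_repos: [(name, source_path, distgit_path)]
>>>     return [repo for repo in workspace_repos
>>>             if expected_nvr(*repo) not in master_repo_nvrs]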
>>>
>>> Then comes the problem of the dependencies that need to be resolved
>>> before we build each RPM. Dependencies exist between packages that
>>> are part of the full RPM distribution we cloned via zuul-cloner into
>>> the job workspace. We can then walk through every repo's spec file
>>> and replace the Requires and BuildRequires entries with the versions
>>> we have in the workspace.
>>>
>>> Example:
>>>
>>> repo A spec:
>>> Requires: B
>>> BuildRequires: B
>>> Source: D
>>>
>>> repo B spec:
>>> Requires: C
>>> BuildRequires: C
>>> Source: E (external soft)
>>>
>>> repo C spec:
>>> Requires: nothing
>>> BuildRequires: nothing
>>>
>>> repo D: a plain source repo (its tarball is A's Source)
>>>
>>> Repos A, B, C and D (part of our RPM distro) have been cloned in the
>>> workspace. The Requires and BuildRequires sections of A, B and C can
>>> be changed to pin the real resulting package NVRs.
>>>
>>> Say B has been checked out at HEAD SHA "1234", A at "6767" and D at
>>> "5678". The A spec file will be changed to target
>>> B-<sf-ver>-<source-E-tag>-1234-0.rpm in its Requires and
>>> BuildRequires, and its Source section will need to target the tarball
>>> of D (the tarball needs to be built and exposed somewhere, managed by
>>> the job). The resulting package will be named
>>> A-<sf-ver>-5678-6767-0.rpm.
>>>
>>> This mechanism is applied across all the repos cloned by zuul-cloner,
>>> so a script is needed here, but I think it can be really generic, as
>>> sketched below.
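>>>
>>> A minimal sketch of that generic rewrite; it only handles the simple
>>> "Tag: name" form used in the example above, and "pins" maps a package
>>> name to the version-release to pin, e.g.
>>> {"B": "<sf-ver>-<source-E-tag>-1234-0"} (illustrative values):
>>>
>>> import re
>>>
>>> DEP_RE = re.compile(r'^(Requires|BuildRequires)\s*:\s*(\S+)\s*$')
>>>
>>> def pin_deps(spec_path, pins):
>>>     lines = []
>>>     for line in open(spec_path):
>>>         match = DEP_RE.match(line)
>>>         if match and match.group(2) in pins:
>>>             tag, name = match.groups()
>>>             # Pin the exact version built in this job workspace
>>>             line = "%s: %s = %s\n" % (tag, name, pins[name])
>>>         lines.append(line)
>>>     with open(spec_path, "w") as spec:
>>>         spec.writelines(lines)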
>>>
>>> At this point we know how to prepare the spec files to build the RPMs
>>> taking the Zuul context into account, so we can then build them (via
>>> Koji for instance). But something else needs to be handled: the RPM
>>> builds need to occur in the right order to satisfy the BuildRequires
>>> dependencies. Again, I think we can deal with that in a generic way,
>>> e.g. with a plain topological sort as sketched below.
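>>>
>>> A minimal sketch ("build_requires" maps a repo name to the set of
>>> workspace repos it build-requires):
>>>
>>> def build_order(build_requires):
>>>     ordered, done = [], set()
>>>
>>>     def visit(name, seen):
>>>         if name in done:
>>>             return
>>>         if name in seen:
>>>             raise ValueError("BuildRequires cycle involving %s" % name)
>>>         for dep in build_requires.get(name, ()):
>>>             visit(dep, seen | {name})
>>>         done.add(name)
>>>         ordered.append(name)
>>>
>>>     for name in build_requires:
>>>         visit(name, set())
>>>     return ordered
>>>
>>> # With the A/B/C example above: C is built first, then B, then A
>>> assert build_order({"A": {"B"}, "B": {"C"}, "C": set()}) == ["C", "B", "A"]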
>>>
>>> When all the RPMs have been built we can run all the tests we want \o/.
>>>
>>> And afterwards, if the tests succeed, only the changes mentioned in
>>> ZUUL_CHANGES land in their respective Git repos. So we know that the
>>> HEADs of all our distribution Git repos pass the CI :)
>>>
>>> When we want to release our distribution, the RPM packages have
>>> already been built; we just need a job that makes the packages
>>> available in another RPM repo than master, a versioned one.
>>> This job will go over all our distribution Git repos, detect the HEAD
>>> of each of them, compute the list of packages that need to land in
>>> that new RPM repo (maybe a last functional test may run, we never
>>> know), and make them land (the Koji "add-pkg to tag" action if we use
>>> Koji), as sketched below.
>>>
>>> Cheers,
>>> Fabien
>>>
>>> On 09/12/2016 at 11:00, Fabien Boucher wrote:
>>>> Hi,
>>>>
>>>> I've been thinking about it since yesterday and I wanted to share my
>>>> thoughts here.
>>>>
>>>> The idea is to have an sf-master target in Koji, a "common pot" ("pot
>>>> commun") where all the packages built during the CI land. We need a
>>>> magic tool that knows how to manage the RPM build/runtime
>>>> dependencies according to the ZUUL_CHANGES variable passed by Zuul.
>>>>
>>>> Each RPM build needs to carry the software commit hash (for the ones
>>>> we host and develop on Gerrit) or the tag (for the other ones, like
>>>> Gerrit or Storyboard). Also, I'm thinking of a meta package called
>>>> sf-<version>.rpm that contains all the mandatory dependencies.
>>>>
>>>> For example:
>>>> * sf-3.0.0-1.rpm - depends on:
>>>> ** managesf-<hash>-1.rpm
>>>> ** managesf-ansible-<hash>-1.rpm
>>>> ** sf-ansible-<hash>-1.rpm
>>>> ** gerrit-2.X.X-1.rpm
>>>> ** gerrit-ansible-0.1-1.rpm
>>>> ** sf-docs-3.0.0-1.rpm
>>>>
>>>> This meta package is then the entry point to install SF and its
>>>> mandatory dependencies. (Maybe it should also include a file with the
>>>> list of extra components (such as repoxplorer, ...) at the exact
>>>> versions supported by this release of SF.) We can even imagine it
>>>> contains our reno notes. In this "dreamland" version, the SF Git
>>>> repository should only contain the (template?) .spec of this meta
>>>> package.
>>>>
>>>> In the CI, the meta package .spec file is modified according to the
>>>> build context. For example, if managesf is in ZUUL_CHANGES then this
>>>> meta package will be rebuilt to pin the freshly built version of
>>>> managesf. But doing that at the meta package level alone is not
>>>> enough: if pysflib is modified, for instance, then managesf's
>>>> build/runtime RPM deps need to be changed to pin the pysflib version.
>>>>
>>>> Here is what the resulting CI workflow to test an incoming change in
>>>> the SF CI could look like. Say we bump Gerrit:
>>>> 1 -> A change on the gerrit-distgit repo is proposed.
>>>> 2 -> First, Gerrit is built on Koji (non-scratch build) and lands in
>>>> the "pot commun".
>>>> 3 -> The meta package is rebuilt to pin the new version of Gerrit.
>>>> 4 -> The NVR of the meta package could maybe use an epoch or the
>>>> ZUUL_UUID?
>>>> 5 -> The SF image is built, or updated if a previous one exists on
>>>> the slave (then having the epoch makes sense), using the "pot commun"
>>>> Koji repo in /etc/yum.repos.d.
>>>> 6 -> Test the SF image as usual.
>>>> 7 -> On success (in the gate), the proposed gerrit-distgit change
>>>> lands in the Git repo.
>>>>
>>>> Building a master SF for our test env could then be: just install the
>>>> "pot commun" repo in /etc/yum.repos.d and yum install sf[-<epoch>].
>>>> Note that since the sf meta package uses the epoch in its NVR,
>>>> theoretically you could just yum install sf from the "pot commun";
>>>> but since, in the flow I described, the sf meta package is built for
>>>> each patchset, we have no guarantee that the latest sf-<epoch>.rpm is
>>>> a working version of SF :(.
>>>>
>>>> Working on a dev env (let's say on managesf again) could then be as
>>>> follows:
>>>> -> Make your change locally in the managesf source repo
>>>> -> Build the SRPM
>>>> -> Send it to Koji (scratch or non-scratch, whatever)
>>>> -> Fetch the artifacts
>>>> -> and install them in the devel SF
>>>> -> When you are satisfied with your changes, propose them
>>>> on Gerrit and the CI will re-test your changes as usual.
>>>>
>>>> Additional note: we want to be sure at any time that the master
>>>> branches of all the repo[-distgit]s that are part of the SF
>>>> distribution work together (pass the SF tests).
>>>>
>>>> What about the SF release? Let's say we now want to release sf-3.0.0.
>>>> Then we will tag (in the Koji terminology,
>>>> https://fedoraproject.org/wiki/Koji#Package_Organization)
>>>> a specific list of built packages with the tag name sf-3.0.0. We do
>>>> the same when we want to release sf-3.0.1, and so on.
>>>> So each stable release of the SF (distribution) will have its own RPM
>>>> repo.
>>>> -> I'm not sure about that; it needs to be experimented with...
>>>>
>>>> Let's discuss this and raise questions and issues.
>>>> I also propose to set up a semi-official Koji for our needs, so we
>>>> can start using it to experiment.
>>>>
>>>> Cheers,
>>>> Fabien
>>>>
>>>>
>>>>
_______________________________________________
Softwarefactory-dev mailing list
[email protected]
https://www.redhat.com/mailman/listinfo/softwarefactory-dev
