On Fri, 24 Apr 2020 10:30:23 +0100
Daniel P. Berrangé <berra...@redhat.com> wrote:

> On Thu, Apr 23, 2020 at 01:36:48PM -0400, Cleber Rosa wrote:
> > 
> > 
> > ----- Original Message -----
> > > From: "Daniel P. Berrangé" <berra...@redhat.com>
> > > To: "Cleber Rosa" <cr...@redhat.com>
> > > Cc: "Peter Maydell" <peter.mayd...@linaro.org>, "Fam Zheng"
> > > <f...@euphon.net>, "Thomas Huth" <th...@redhat.com>, "Beraldo
> > > Leal" <bl...@redhat.com>, "Erik Skultety" <eskul...@redhat.com>,
> > > "Philippe Mathieu-Daudé" <phi...@redhat.com>, "Wainer Moschetta"
> > > <wmosc...@redhat.com>, "Markus Armbruster" <arm...@redhat.com>,
> > > "Wainer dos Santos Moschetta" <waine...@redhat.com>, "QEMU
> > > Developers" <qemu-devel@nongnu.org>, "Willian Rampazzo"
> > > <wramp...@redhat.com>, "Alex Bennée" <alex.ben...@linaro.org>,
> > > "Eduardo Habkost" <ehabk...@redhat.com> Sent: Thursday, April 23,
> > > 2020 1:13:22 PM Subject: Re: [PATCH 0/5] QEMU Gating CI
> > > 
> > > On Thu, Apr 23, 2020 at 01:04:13PM -0400, Cleber Rosa wrote:
> > > > 
> > > > 
> > > > ----- Original Message -----
> > > > > From: "Peter Maydell" <peter.mayd...@linaro.org>
> > > > > To: "Markus Armbruster" <arm...@redhat.com>
> > > > > Cc: "Fam Zheng" <f...@euphon.net>, "Thomas Huth"
> > > > > <th...@redhat.com>, "Beraldo Leal" <bl...@redhat.com>, "Erik
> > > > > Skultety" <eskul...@redhat.com>, "Alex Bennée"
> > > > > <alex.ben...@linaro.org>, "Wainer Moschetta"
> > > > > <wmosc...@redhat.com>, "QEMU Developers"
> > > > > <qemu-devel@nongnu.org>, "Wainer dos Santos Moschetta"
> > > > > <waine...@redhat.com>, "Willian Rampazzo"
> > > > > <wramp...@redhat.com>, "Cleber Rosa" <cr...@redhat.com>,
> > > > > "Philippe Mathieu-Daudé" <phi...@redhat.com>, "Eduardo
> > > > > Habkost" <ehabk...@redhat.com> Sent: Tuesday, April 21, 2020
> > > > > 8:53:49 AM Subject: Re: [PATCH 0/5] QEMU Gating CI
> > > > > 
> > > > > On Thu, 19 Mar 2020 at 16:33, Markus Armbruster
> > > > > <arm...@redhat.com> wrote:
> > > > > > Peter Maydell <peter.mayd...@linaro.org> writes:
> > > > > > > I think we should start by getting the gitlab setup
> > > > > > > working for the basic "x86 configs" first. Then we can
> > > > > > > try adding a runner for s390 (that one's logistically
> > > > > > > easiest because it is a project machine, not one owned by
> > > > > > > me personally or by Linaro) once the basic framework is
> > > > > > > working, and expand from there.
> > > > > >
> > > > > > Makes sense to me.
> > > > > >
> > > > > > Next steps to get this off the ground:
> > > > > >
> > > > > > * Red Hat provides runner(s) for x86 stuff we care about.
> > > > > >
> > > > > > * If that doesn't cover 'basic "x86 configs"' in your
> > > > > > judgement, we fill the gaps as described below under
> > > > > > "Expand from there".
> > > > > >
> > > > > > * Add an s390 runner using the project machine you
> > > > > > mentioned.
> > > > > >
> > > > > > * Expand from there: identify the remaining gaps, map them
> > > > > > to people / organizations interested in them, and solicit
> > > > > > contributions from these
> > > > > >   guys.
> > > > > >
> > > > > > A note on contributions: we need both hardware and people.
> > > > > > By people I mean maintainers for the infrastructure, the
> > > > > > tools and all the runners. Cleber & team are willing to
> > > > > > serve for the infrastructure, the tools and
> > > > > > the Red Hat runners.
> > > > > 
> > > > > So, with 5.0 nearly out the door it seems like a good time to
> > > > > check in on this thread again to ask where we are
> > > > > progress-wise with this. My impression is that this patchset
> > > > > provides most of the scripting and config side of the first
> > > > > step, so what we need is for RH to provide an x86 runner
> > > > > machine and tell the gitlab CI it exists. I appreciate that
> > > > > the whole coronavirus and working-from-home situation will
> > > > > have upended everybody's plans, especially when actual
> > > > > hardware might be involved, but how's it going ?
> > > > > 
> > > > 
> > > > Hi Peter,
> > > > 
> > > > You hit the nail on the head here.  Our ability to move some
> > > > machines from one lab to another (across the country) was indeed
> > > > affected, but we're actively working on it.
> > > 
> > > For x86, do we really need to be using custom runners ?
> > > 
> > 
> > Hi Daniel,
> > 
> > We're already using the shared x86 runners, but with a different
> > goal.  The goal of the "Gating CI" is indeed to expand to non-x86
> > environments.  We're in a "chicken and egg" kind of situation,
> > because we'd like to prove that GitLab CI will allow QEMU to expand
> > to very different runners and jobs, while not yet having all that
> > hardware set up and publicly available.
> > 
> > My experiments were really about that point, I mean, confirming
> > that we can grow the number of
> > architectures/runners/jobs/configurations to provide coverage equal
> > to or greater than what Peter already does.
> 
> So IIUC, you're saying that for x86 gating, the intention is to use
> shared runners in general.
> 

No, I've said that whenever possible we could use containers and thus
shared runners.  For instance, testing QEMU running on x86 CentOS 8
with KVM is not something we could do with shared runners.
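
To illustrate the distinction (just a rough sketch -- the job names,
tags and configure flags below are made up for the example, not taken
from this series), a container-based build can use the shared runners,
while a KVM test job would be pinned to a private runner via tags:

  # Plain container job: runs on GitLab's shared x86 runners.
  build-x86_64-container:
    image: centos:8
    script:
      - mkdir build && cd build
      - ../configure --target-list=x86_64-softmmu
      - make -j"$(nproc)"

  # Needs /dev/kvm on real hardware, so it only runs on a custom
  # runner registered with a matching tag.
  test-x86_64-centos8-kvm:
    tags:
      - centos8-x86_64-kvm
    script:
      - mkdir build && cd build
      - ../configure --target-list=x86_64-softmmu --enable-kvm
      - make -j"$(nproc)" && make check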

> Your current work that you say is blocked on access to x86 hardware,
> is just about demonstrating the concept of plugging in custom
> runners, while we wait for access to non-x86 hardware ?
> 

Short answer is no.  The original scope and goal was to have the same
or very similar jobs to those Peter runs himself on his own machines.
So it was/is not about just x86 hardware, but x86 that can run a
couple of different OSs, and non-x86 hardware too.  We're basically
scaling down and changing the scope (for instance, adding the s390
machine here) in an attempt to get this moving forward.

> > > With GitLab, if someone forks the repo to their personal
> > > namespace, they cannot use any custom runners set up by the
> > > origin project. So if we use custom runners for x86, people
> > > forking won't be able to run the GitLab CI jobs.
> > > 
> > 
> > They will continue to be able to use the jobs and runners already
> > defined in the .gitlab-ci.yml file.  This work will only affect
> > people pushing to the/a "staging" branch.
> > 
> > > As a sub-system maintainer I wouldn't like this, because I
> > > ideally want to be able to run the same jobs on my staging tree
> > > that Peter will run at merge time for the PULL request I send.
> > > 
> > 
> > If you're looking for symmetry between any PR and "merge time"
> > jobs, the only solution is to allow any PR to access the whole
> > diverse set of non-shared machines we're hoping to have.  This may
> > be something we'll get to eventually, but I doubt we can tackle it
> > in the near future.
> 
> It occurred to me that we could do this if we grant selective access
> to the GitLab repos to people who are official subsystem maintainers.
> GitLab has a concept of "protected branches", so you can control who
> is allowed to push changes at a per-branch granularity.
> 
> So, for example, in the main qemu.git, we could create branches for
> each subsystem tree, e.g.
> 
>   staging-block
>   staging-qapi
>   staging-crypto
>   staging-migration
>   ....
> 
> and for each of these branches, we can grant access to the relevant
> subsystem maintainer(s).
> 
> When they're ready to send a pull request to Peter, they can push
> their tree to this branch. Since the branch is in the main
> gitlab.com/qemu/qemu project namespace, this branch can run CI using
> the private QEMU runners. The subsystem maintainer can thus see the
> full set of CI results across all platforms required by Gating,
> before Peter even gets the pull request.
> 

Sure, this is actually an extrapolation/extension of what we're
proposing to do here with the single "staging" branch.  I see no issue
at all with having more than one staging branch (one per
subsystem/maintainer).
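
As a rough illustration (my sketch only -- the rules usage and branch
names here are assumptions, not something already in this series), the
gating jobs could be restricted to any branch matching the staging
prefix, so per-subsystem branches reuse the exact same pipeline:

  .gating-rules:
    rules:
      # Matches the single "staging" branch as well as per-subsystem
      # branches such as staging-block, staging-qapi, staging-crypto.
      - if: '$CI_COMMIT_BRANCH =~ /^staging(-.*)?$/'

  gating-build-s390x:
    extends: .gating-rules
    tags:
      - s390x
    script:
      - mkdir build && cd build
      - ../configure --target-list=s390x-softmmu
      - make -j"$(nproc)" && make check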

> So when Peter then looks at merging the pull request to master, the
> only failures he's likely to see are non-deterministic bugs, or
> issues caused by semantic conflicts with other recently merged code.
> 
> It would even be possible to do the final merge into master entirely
> from GitLab, no need to go via email. When the source branch & target
> branch are within the same git repo, GitLab has the ability to run CI
> jobs against the resulting merge commit in a strict gating manner,
> before it hits master. They call this "Merge trains" in their
> documentation.
> 
> IOW, from Peter's POV, merging pull requests could be as simple as
> hitting the merge button in the GitLab merge request UI. Everything
> wrt CI would be completely automated, and the subsystem maintainers
> would be responsible for dealing with merge conflicts & CI failures,
> which is more scalable for the person co-ordinating the merges into
> master.
> 

This is very much aligned with some previous discussions, I believe, in
the RFC thread.  But for practical purposes, the general direction was
to scale down to the bare minimum needed to replicate Peter's setup and
workflow, and then move from there to possibly something very similar
to what you're describing here.
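
For reference, the merge request / merge train flow you describe would
presumably start with enabling merge request pipelines, something along
these lines at the top of .gitlab-ci.yml (again just a sketch of the
standard GitLab pattern, not part of this series):

  workflow:
    rules:
      # Run pipelines for merge requests (a prerequisite for merge
      # trains), for pushes to staging branches, and for master.
      - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
      - if: '$CI_COMMIT_BRANCH =~ /^staging(-.*)?$/'
      - if: '$CI_COMMIT_BRANCH == "master"'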

> 
> Regards,
> Daniel


Thanks a *whole lot* for the feedback, Daniel!
- Cleber.

