On 19/03/21 11:18, Andrew Jones wrote:
Yikes, that is 41 hours per CI run. I wonder if GitLab's CI minutes are
on slow machines or if we'll hit the same issue with dedicated runners.
It seems like CI optimization will be necessary...
We need to reduce the amount of CI we do, not only because we can't afford
it, but because it's wasteful. I hate to think of all the kWh spent
testing the exact same code in the exact same way, since everyone runs
everything with a simple 'git push'.
Yes, I thought the same.
IMHO, 'git push' shouldn't trigger
anything. Starting CI should be an explicit step.
It is possible to do that on a project that uses merge requests, for
example like this:
workflow:
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
    - if: '$CI_COMMIT_BRANCH'
      when: never
For us it's a bit more complicated (no merge requests).
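Without merge requests, one option would be to accept only explicitly
started pipelines. A sketch (assuming we only want pipelines started from
the GitLab web UI or the trigger API, never from a plain push):

```yaml
workflow:
  rules:
    - if: '$CI_PIPELINE_SOURCE == "web"'      # started manually from the GitLab UI
    - if: '$CI_PIPELINE_SOURCE == "trigger"'  # started via the pipeline trigger API
    - when: never                             # everything else, including 'git push'
```

With this, 'git push' creates no pipeline at all; CI becomes a deliberate
action rather than a side effect.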
Another common feature is failing the pipeline immediately if one of the
jobs fails, but GitLab does not support it
(https://gitlab.com/gitlab-org/gitlab/-/issues/23605).
Also, the default CI
should only trigger tests associated with the code changed. One should
have to explicitly trigger a complete CI when they deem it worthwhile.
This is interesting. We could add a stage that runs "git diff" to find
the changed files and sets some variables (e.g. softmmu, user, TCG,
various targets) based on the results. Later jobs would then use those
variables to skip some jobs or some tests, for example skipping
check-tcg. See
https://docs.gitlab.com/ee/ci/variables/#inherit-cicd-variables for more
information.
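A minimal sketch of such a stage (the job names, paths, and variable
names are hypothetical; note that dotenv variables are inherited by later
jobs' scripts but are not visible to their rules:, so the skipping has to
happen inside the script):

```yaml
detect-changes:
  stage: .pre
  script:
    # Flag TCG-related changes between the previous and current commit.
    # 'tcg/' is a placeholder path; real path lists would be per-subsystem.
    - |
      if git diff --name-only "$CI_COMMIT_BEFORE_SHA" "$CI_COMMIT_SHA" | grep -q '^tcg/'; then
        echo "RUN_TCG=1" >> build.env
      else
        echo "RUN_TCG=0" >> build.env
      fi
  artifacts:
    reports:
      dotenv: build.env

check-tcg:
  stage: test
  script:
    # RUN_TCG comes from the dotenv report above.
    - if [ "$RUN_TCG" != "1" ]; then echo "no TCG changes, skipping"; exit 0; fi
    - make check-tcg
```

The job still starts, but exits immediately when nothing relevant
changed, which already saves most of the runner time.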
Paolo