Re: out of CI pipeline minutes again

2023-03-23 Thread Paolo Bonzini

On 3/21/23 17:40, Daniel P. Berrangé wrote:
> On Mon, Feb 27, 2023 at 12:43:55PM -0500, Stefan Hajnoczi wrote:
>> Here are IRC logs from a discussion that has taken place about this
>> topic. Summary:
>> - QEMU has ~$500/month Azure credits available that could be used for CI
>> - Burstable VMs like Azure AKS nodes seem like a good strategy in
>> order to minimize hosting costs and parallelize when necessary to keep
>> CI duration low
>> - Paolo is asking for someone from Red Hat to dedicate the time to set
>> up Azure AKS with GitLab CI
>
> 3 weeks later... Any progress on getting Red Hat to assign someone to
> set up Azure for our CI?


Yes!  Camilla Conte has been working on it and documented her progress 
on https://wiki.qemu.org/Testing/CI/KubernetesRunners


Paolo




Re: out of CI pipeline minutes again

2023-03-23 Thread Alex Bennée


Eldon Stegall  writes:

> On Tue, Mar 21, 2023 at 04:40:03PM +, Daniel P. Berrangé wrote:
>> 3 weeks later... Any progress on getting Red Hat to assign someone to
>> set up Azure for our CI?
>
> I have the physical machine that we have offered to host for CI set up
> with a recent version of fcos.
>
> It isn't yet running a gitlab worker because I don't believe I have
> access to create a gitlab worker token for the QEMU project.

Can you not see it under:

  https://gitlab.com/qemu-project/qemu/-/settings/ci_cd

If not I can share it with you via some other out-of-band means.

> If creating
> such a token is too much hassle, I could simply run the gitlab worker
> against my fork in my gitlab account, and give full access to my repo to
> the QEMU maintainers, so they could push to trigger jobs.
>
> If you want someone to get the gitlab kubernetes operator set up in AKS,
> I ended up getting a CKA cert a few years ago while working on an
> operator. I could probably devote some time to get that going.
>
> If any of this sounds appealing, let me know.
>
> Thanks,
> Eldon


-- 
Alex Bennée
Virtualisation Tech Lead @ Linaro



Re: out of CI pipeline minutes again

2023-03-21 Thread Daniel P. Berrangé
On Mon, Feb 27, 2023 at 12:43:55PM -0500, Stefan Hajnoczi wrote:
> Here are IRC logs from a discussion that has taken place about this
> topic. Summary:
> - QEMU has ~$500/month Azure credits available that could be used for CI
> - Burstable VMs like Azure AKS nodes seem like a good strategy in
> order to minimize hosting costs and parallelize when necessary to keep
> CI duration low
> - Paolo is asking for someone from Red Hat to dedicate the time to set
> up Azure AKS with GitLab CI

3 weeks later... Any progress on getting Red Hat to assign someone to
set up Azure for our CI?

With regards,
Daniel
-- 
|: https://berrange.com -o- https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o- https://fstop138.berrange.com :|
|: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|




Re: out of CI pipeline minutes again

2023-03-01 Thread Alex Bennée


Eldon Stegall  writes:

> On Mon, Feb 27, 2023 at 12:43:55PM -0500, Stefan Hajnoczi wrote:
>> - Personally, I don't think this should exclude other efforts like
>> Eldon's. We can always add more private runners!
>
> Hi!
> Thanks so much to Alex, Thomas, Gerd, et al for the pointers.
>
> Although the month has passed and presumably gitlab credits have
> replenished, I am interested in continuing my efforts to replicate the
> shared runner capabilities. After some tinkering I was able to utilise
> Gerd's stateless runner strategy with a few changes, and had a number of
> tests pass in a pipeline on my repo:
>
> https://gitlab.com/eldondev/qemu/-/pipelines/791573670

Looking good. Eyeballing the run times they seem to be faster as well. I
assume the runner is less loaded than the shared gitlab ones?

> Looking at the failures, it seems that some may already be addressed in
> patchsets, and some may be attributable to things like open file handle
> count, which would be useful to configure directly on the d-in-d
> runners, so I will investigate those after integrating the changes from
> the past couple of days.
>
> I have been reading through Alex's patchsets to lower CI time in the
> hopes that I might be able to contribute something there from my
> learnings on these pipelines. If there is an intent to switch to the
> kubernetes gitlab executor, I have worked with kubernetes a number of
> times in the past, and I can trial that as well.

I've dropped that patch for now but I might revisit once the current
testing/next is done.

> Even with the possibility of turning on Azure and avoiding these monthly
> crunches, maybe I can provide some help improving the turnaround time of
> some of the jobs themselves, once I polish off greening the remaining
> failures on my fork.
>
> Forgive me if I knock around a bit here while I figure out how to be
> useful.

No problem, thanks for taking the time to look into it.

-- 
Alex Bennée
Virtualisation Tech Lead @ Linaro



Re: out of CI pipeline minutes again

2023-02-28 Thread Eldon Stegall
On Mon, Feb 27, 2023 at 12:43:55PM -0500, Stefan Hajnoczi wrote:
> - Personally, I don't think this should exclude other efforts like
> Eldon's. We can always add more private runners!

Hi!
Thanks so much to Alex, Thomas, Gerd, et al for the pointers.

Although the month has passed and presumably gitlab credits have
replenished, I am interested in continuing my efforts to replicate the
shared runner capabilities. After some tinkering I was able to utilise
Gerd's stateless runner strategy with a few changes, and had a number of
tests pass in a pipeline on my repo:

https://gitlab.com/eldondev/qemu/-/pipelines/791573670

Looking at the failures, it seems that some may already be addressed in
patchsets, and some may be attributable to things like open file handle
count, which would be useful to configure directly on the d-in-d
runners, so I will investigate those after integrating the changes from
the past couple of days.

I have been reading through Alex's patchsets to lower CI time in the
hopes that I might be able to contribute something there from my
learnings on these pipelines. If there is an intent to switch to the
kubernetes gitlab executor, I have worked with kubernetes a number of
times in the past, and I can trial that as well.

Even with the possibility of turning on Azure and avoiding these monthly
crunches, maybe I can provide some help improving the turnaround time of
some of the jobs themselves, once I polish off greening the remaining
failures on my fork.

Forgive me if I knock around a bit here while I figure out how to be
useful.

Best,
Eldon



Re: out of CI pipeline minutes again

2023-02-27 Thread Stefan Hajnoczi
Here are IRC logs from a discussion that has taken place about this
topic. Summary:
- QEMU has ~$500/month Azure credits available that could be used for CI
- Burstable VMs like Azure AKS nodes seem like a good strategy in
order to minimize hosting costs and parallelize when necessary to keep
CI duration low
- Paolo is asking for someone from Red Hat to dedicate the time to set
up Azure AKS with GitLab CI
- Personally, I don't think this should exclude other efforts like
Eldon's. We can always add more private runners!

11:31 < pm215> does anybody who understands how the CI stuff works
have the time to take on the task of getting this all to work with
either (a) a custom runner running on one of the hosts we've been
offered or (b) whatever it needs to run using our donated azure
credits ?
11:34 < danpb> i would love to, but i can't volunteer my time right now :-(
11:34 < stefanha> Here is the email thread for anyone else who's
trying to catch up (like me):
https://lore.kernel.org/qemu-devel/cafeaca83u_enxdj3gjka-xv6eljgjpr_9frdkaqm3qacyhr...@mail.gmail.com/
11:35 < danpb> what paolo suggested about using the Kubernetes runners
for Azure seems like the ideal  approach
11:36 < danpb> as it would be most cost effective in terms of Azure
resources consumed, and would scale well as it would support as many
parallel runners as we can afford from our Azure allowance
11:36 < danpb> but it's also probably the more complex option to set up
11:37 < stefanha> It's a little sad to see the people who are
volunteering to help being ignored in the email thread.
11:37 < stefanha> Ben Dooks asked about donating minutes (the easiest solution).
11:38 < peterx> jsnow[m]: an AHCI question: where normally does the
address in ahci_map_fis_address() (aka, AHCIPortRegs.is_addr[_hi])
reside?  Should that be part of guest RAM that the guest AHCI driver
maps?
11:38 < stefanha> eldondev is in the process of setting up a runner.
11:38 < pm215> stefanha: the problem is there is no one person who has
all of (a) the authority to do stuff (b) the knowledge of what the
right thing to do is (c) the time to do it...
11:38 < danpb> stefanha: the challenge is how to accept the donation
in a sustainable way
11:39 < th_huth> stefanha: Do you know whether we still got that
fosshost server around? ... I know, fosshost is going away, but maybe
we could use it as a temporary solution at least
11:39 < stefanha> th_huth: fosshost, the organization, has ceased to operate.
11:39 < danpb> fosshost ceased operation as a concept
11:39 < davidgiluk> I wonder if there's a way to make it so that those
of us with hardware can avoid eating into the central CI count
11:39 < danpb> if we still have any access to a machine its just by luck
11:39 < stefanha> th_huth: QEMU has 2 smallish x86 VMs ready to use at
Oregon State University Open Source Lab.
11:40 < stefanha> (2 vCPUs and 4 GB RAM, probably not enough for
private runners)
11:41 < peterx> jsnow[m]: the context is that someone is optimizing
migration by postponing all memory updates to after some point, and
the AHCI post_load is not happy because it cannot find/map the FIS
address here due to delayed memory commit()
(https://pastebin.com/kADnTKzp).  However when I look at that I failed
to see why if it's in normal RAM (but I think I had somewhere wrong)
11:41 < danpb> stefanha: fyi gitlab's shared runners are 1 vCPU,
3.75 GB of RAM by default
11:41 < stefanha> Gerd Hoffmann seems to have a hands-off/stateless
private runner setup that can be scaled to multiple machines.
11:41 < stefanha> danpb: :)
11:41 < danpb> stefanha: so those two small VMs are equivalent to 2
runners, and we run 30-40 jobs in parallel
11:41 < stefanha> The first thing that needs to be decided is which
approach to take:
11:41 < jsnow[m]> peterx: I'm rusty but I think so. the guest writes
an FIS (Frame Information Structure)  for the card to read and operate
on
11:41 < danpb> stefanha:  just having two runners is going to make our
pipelines take x20 longer to finish
11:41 < stefanha> 1. Donating more minutes to GitLab CI
11:42 < stefanha> 2. Using Azure to spawn runners
11:42 < stsquad> stefanha I think the main bottleneck is commissioning
and admin'ing machines - but we have ansible playbooks to do it for
our other custom runners so it should "just" be a case of writing one
for an x86 runner
11:42 < stefanha> 3. Hosting runners on servers
11:42 < danpb> 

Re: out of CI pipeline minutes again

2023-02-27 Thread Daniel P. Berrangé
On Thu, Feb 23, 2023 at 10:11:11PM +, Eldon Stegall wrote:
> On Thu, Feb 23, 2023 at 03:33:00PM +, Daniel P. Berrangé wrote:
> > IIUC, we already have available compute resources from a couple of
> > sources we could put into service. The main issue is someone to
> > actually configure them to act as runners *and* maintain their
> > operation indefinitely going forward. The sysadmin problem is
> > what made/makes gitlab's shared runners so incredibly appealing.
> 
> Hello,
> 
> I would like to do this, but the path to contribute in this way isn't clear to
> me at this moment. I made it as far as creating a GitLab fork of QEMU, and then
> attempting to create a merge request from my branch in order to test the GitLab
> runner I have provisioned. Not having previously tried to contribute via
> GitLab, I was a bit stymied that it is not even possible to create a merge
> request unless I am a member of the project? I clicked a button to request
> access.  
> 
> Alex's plan from last month sounds feasible:
>  
>  - provisioning scripts in scripts/ci/setup (if existing not already 
>  good enough) 
>  - tweak to handle multiple runner instances (or more -j on the build) 
>  - changes to .gitlab-ci.d/ so we can use those machines while keeping 
>  ability to run on shared runners for those outside the project 
> 
> Daniel, you pointed out the importance of reproducibility, and thus the
> use of the two-step process, build-docker, and then test-in-docker, so it
> seems that only docker and the gitlab agent would be strong requirements for
> running the jobs?

Almost our entire CI setup is built around use of docker and I don't
believe we really want to change that. Even ignoring GitLab, pretty
much all public CI services support use of docker containers for the
CI environment, so that's a de facto standard.

So while the gitlab runner agent can support many different execution
environments, I don't think we want to consider any except for the
ones that support containers (and that would need docker-in-docker
to be enabled too). Essentially we'll be using GitLab free CI credits
for most of the month. What we need is some extra private CI resource
that can pick up the slack when we run out of free CI credits each
month. Thus the private CI resource needs to be compatible with the
public shared runners, by providing the same docker based environment[1].
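Concretely, registering a private runner that matches that environment could
look roughly like this (a sketch only: the image and tag names are assumptions,
and the registration token comes from the project's CI/CD settings page):

  # privileged docker executor so the docker-in-docker jobs keep working
  gitlab-runner register \
      --non-interactive \
      --url https://gitlab.com/ \
      --registration-token "$RUNNER_REG_TOKEN" \
      --executor docker \
      --docker-image docker:stable \
      --docker-privileged \
      --tag-list x86_64,docker \
      --run-untagged="true"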

It is a great shame that our current private runners ansible playbooks
were not configuring the system for use with docker, as that would
have got us 90% of the way there already.



One thing to bear in mind is that a typical QEMU pipeline has 130 jobs
running.

Each gitlab shared runner is 1 vCPU, 3.75 GB of RAM, and we're using
as many as 60-70 of such instances at a time.  A single physical
machine probably won't cope unless it is very big.

To avoid making the overall pipeline wallclock time too long, we need
to be able to handle a large number of parallel jobs at certain times.
We're quite peaky in our usage. Some days we merge nothing and so
consume no CI. Some days we may merge many PRs and so consume lots
of CI.  So buying lots of VMs to run 24x7 is quite wasteful. A burstable
container service is quite appealing.


IIUC, GitLab's shared runners use GCP's "spot" instances which are
cheaper than regular instances. The downside is that the VM can get
killed/descheduled if something higher priority needs Google's
resources. Not too nice for reliability, but excellent for cost saving.

With regards,
Daniel

[1] there are still several ways to achieve this.  A bare metal machine
with a local install of docker, or podman, vs pointing to a public
k8s instance that can run containers, and possibly other options
too.
-- 
|: https://berrange.com -o- https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o- https://fstop138.berrange.com :|
|: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|




Re: out of CI pipeline minutes again

2023-02-24 Thread Alex Bennée


Eldon Stegall  writes:

> On Thu, Feb 23, 2023 at 03:33:00PM +, Daniel P. Berrangé wrote:
>> IIUC, we already have available compute resources from a couple of
>> sources we could put into service. The main issue is someone to
>> actually configure them to act as runners *and* maintain their
>> operation indefinitely going forward. The sysadmin problem is
>> what made/makes gitlab's shared runners so incredibly appealing.
>
> Hello,
>
> I would like to do this, but the path to contribute in this way isn't clear to
> me at this moment. I made it as far as creating a GitLab fork of QEMU, and then
> attempting to create a merge request from my branch in order to test the GitLab
> runner I have provisioned. Not having previously tried to contribute via
> GitLab, I was a bit stymied that it is not even possible to create a merge
> request unless I am a member of the project? I clicked a button to request
> access.

We don't process merge requests and shouldn't need them to run CI. By
default a pushed branch won't trigger testing, so we document a way to
tweak your git config to set the QEMU_CI variable so that:

  git push-ci-now -f gitlab

will trigger the testing. See:

  https://qemu.readthedocs.io/en/latest/devel/ci.html#custom-ci-cd-variables
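Roughly speaking, the alias boils down to passing the QEMU_CI variable as a
git push option (a sketch based on the documentation linked above; treat the
docs as authoritative for the exact spelling):

  # in ~/.gitconfig; "gitlab" below is assumed to be the remote name of your fork
  [alias]
      # create the pipeline but leave the jobs to be started manually
      push-ci = "push -o ci.variable=QEMU_CI=1"
      # create the pipeline and run all jobs immediately
      push-ci-now = "push -o ci.variable=QEMU_CI=2"

  # one-off equivalent without the alias:
  git push -o ci.variable=QEMU_CI=2 -f gitlab mybranch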

>
> Alex's plan from last month sounds feasible:
>  
>  - provisioning scripts in scripts/ci/setup (if existing not already 
>  good enough) 
>  - tweak to handle multiple runner instances (or more -j on the build) 
>  - changes to .gitlab-ci.d/ so we can use those machines while keeping 
>  ability to run on shared runners for those outside the project 
>
> Daniel, you pointed out the importance of reproducibility, and thus the
> use of the two-step process, build-docker, and then test-in-docker, so it
> seems that only docker and the gitlab agent would be strong requirements for
> running the jobs?

Yeah the current provisioning scripts install packages to the host. We'd
like to avoid that and use the runner inside our docker images rather
than polluting the host with setup. Although in practice some hosts pull
double duty and developers want to be able to replicate the setup when
chasing CI errors so will likely install the packages anyway.

>
> I feel like the greatest win for this would be to at least host the
> cirrus-run jobs on a dedicated runner because the machine seems to
> simply be burning double minutes until the cirrus job is complete, so I
> would expect the GitLab runner requirements for those jobs to be low?
>
> If there are some other steps that I should take to contribute in this
> capacity, please let me know.
>
> Maybe I could send a patch to tag cirrus jobs in the same way that the
> s390x jobs are currently tagged, so that we could run those separately?
>
> Thanks,
> Eldon


-- 
Alex Bennée
Virtualisation Tech Lead @ Linaro



Re: out of CI pipeline minutes again

2023-02-24 Thread Paolo Bonzini

On 2/23/23 16:33, Daniel P. Berrangé wrote:
> On Thu, Feb 23, 2023 at 03:28:37PM +, Ben Dooks wrote:
>> On Thu, Feb 23, 2023 at 12:56:56PM +, Peter Maydell wrote:
>>> Hi; the project is out of gitlab CI pipeline minutes again.
>>> In the absence of any other proposals, no more pull request
>>> merges will happen til 1st March...
>>
>> Is there a way of sponsoring more minutes, could people provide
>> runner resources to help?
>
> IIUC, we already have available compute resources from a couple of
> sources we could put into service. The main issue is someone to
> actually configure them to act as runners *and* maintain their
> operation indefinitely going forward. The sysadmin problem is
> what made/makes gitlab's shared runners so incredibly appealing.


Indeed, that's the main issue.  Now that GNOME is hosting 
download.qemu.org, we have much more freedom about how to use the 
credits that we get from the Azure open source sponsorship program. 
Currently we only have 2 VMs running but we could even reduce that to 
just one.


Using the Kubernetes executor for GitLab would be both cheap and
convenient because we would only pay (use sponsorship credits) when
CI is in progress.  Using beefy containers (e.g. 20*16 vCPUs) is
therefore not out of the question.  Unfortunately, this is not an easy
thing to set up, especially for people without much k8s experience.
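For anyone picking this up, the rough shape of it would be something like the
following (the resource names and chart values are illustrative assumptions,
not a tested recipe):

  # create an autoscaling AKS cluster from the sponsored Azure subscription
  az group create --name qemu-ci --location eastus
  az aks create --resource-group qemu-ci --name qemu-ci-runners \
      --node-count 1 --enable-cluster-autoscaler --min-count 1 --max-count 20
  az aks get-credentials --resource-group qemu-ci --name qemu-ci-runners

  # install the GitLab runner Helm chart with the Kubernetes executor;
  # privileged pods are needed for the docker-in-docker jobs
  helm repo add gitlab https://charts.gitlab.io
  helm install gitlab-runner gitlab/gitlab-runner \
      --set gitlabUrl=https://gitlab.com/ \
      --set runnerRegistrationToken="$RUNNER_REG_TOKEN" \
      --set runners.privileged=true \
      --set concurrent=20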


Paolo




Re: out of CI pipeline minutes again

2023-02-24 Thread Gerd Hoffmann
On Thu, Feb 23, 2023 at 10:11:11PM +, Eldon Stegall wrote:
> On Thu, Feb 23, 2023 at 03:33:00PM +, Daniel P. Berrangé wrote:
> > IIUC, we already have available compute resources from a couple of
> > sources we could put into service. The main issue is someone to
> > actually configure them to act as runners *and* maintain their
> > operation indefinitely going forward. The sysadmin problem is
> > what made/makes gitlab's shared runners so incredibly appealing.

I have a gitlab runner active on a hosted machine.  It builds on fedora
coreos and doesn't need much baby-sitting.  Just copied the bits into a
new repository and pushed to
https://gitlab.com/kraxel/coreos-gitlab-runner

> Daniel, you pointed out the importance of reproducibility, and thus the
> use of the two-step process, build-docker, and then test-in-docker, so it
> seems that only docker and the gitlab agent would be strong requirements for
> running the jobs?

The above works just fine as a replacement for the shared runners.  It can
also run in parallel to the shared runners, but it's slower at picking
up jobs, so it'll effectively take over when you run out of minutes.
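For context, the general shape of such a containerised runner (this is the
standard pattern from GitLab's documentation, not necessarily exactly what the
repository above does) is to run the official runner image with its config
directory and the docker socket mounted in:

  docker run -d --name gitlab-runner --restart always \
      -v /srv/gitlab-runner/config:/etc/gitlab-runner \
      -v /var/run/docker.sock:/var/run/docker.sock \
      gitlab/gitlab-runner:latest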

take care,
  Gerd




Re: out of CI pipeline minutes again

2023-02-23 Thread Eldon Stegall
On Thu, Feb 23, 2023 at 03:33:00PM +, Daniel P. Berrangé wrote:
> IIUC, we already have available compute resources from a couple of
> sources we could put into service. The main issue is someone to
> actually configure them to act as runners *and* maintain their
> operation indefinitely going forward. The sysadmin problem is
> what made/makes gitlab's shared runners so incredibly appealing.

Hello,

I would like to do this, but the path to contribute in this way isn't clear to
me at this moment. I made it as far as creating a GitLab fork of QEMU, and then
attempting to create a merge request from my branch in order to test the GitLab
runner I have provisioned. Not having previously tried to contribute via
GitLab, I was a bit stymied that it is not even possible to create a merge
request unless I am a member of the project? I clicked a button to request
access.  

Alex's plan from last month sounds feasible:
 
 - provisioning scripts in scripts/ci/setup (if existing not already 
 good enough) 
 - tweak to handle multiple runner instances (or more -j on the build) 
 - changes to .gitlab-ci.d/ so we can use those machines while keeping 
 ability to run on shared runners for those outside the project 

Daniel, you pointed out the importance of reproducibility, and thus the
use of the two-step process, build-docker, and then test-in-docker, so it
seems that only docker and the gitlab agent would be strong requirements for
running the jobs?

I feel like the greatest win for this would be to at least host the
cirrus-run jobs on a dedicated runner because the machine seems to
simply be burning double minutes until the cirrus job is complete, so I
would expect the GitLab runner requirements for those jobs to be low?

If there are some other steps that I should take to contribute in this
capacity, please let me know.

Maybe I could send a patch to tag cirrus jobs in the same way that the
s390x jobs are currently tagged, so that we could run those separately?
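Something along these lines, perhaps (the job, template and tag names below
are assumptions, just to show the shape of the change in .gitlab-ci.d/):

  # hypothetical sketch: route the cirrus-run proxy jobs to a dedicated
  # runner by tag; that runner would be registered with the matching tag
  x64-freebsd-13-build:
    extends: .cirrus_build_job
    tags:
      - cirrus-proxy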

Thanks,
Eldon



Re: out of CI pipeline minutes again

2023-02-23 Thread Daniel P. Berrangé
On Thu, Feb 23, 2023 at 03:28:37PM +, Ben Dooks wrote:
> On Thu, Feb 23, 2023 at 12:56:56PM +, Peter Maydell wrote:
> > Hi; the project is out of gitlab CI pipeline minutes again.
> > In the absence of any other proposals, no more pull request
> > merges will happen til 1st March...
> 
> Is there a way of sponsoring more minutes, could people provide
> runner resources to help?

IIUC, we already have available compute resources from a couple of
sources we could put into service. The main issue is someone to
actually configure them to act as runners *and* maintain their
operation indefinitely going forward. The sysadmin problem is
what made/makes gitlab's shared runners so incredibly appealing.

With regards,
Daniel
-- 
|: https://berrange.com -o- https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o- https://fstop138.berrange.com :|
|: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|




Re: out of CI pipeline minutes again

2023-02-23 Thread Ben Dooks
On Thu, Feb 23, 2023 at 12:56:56PM +, Peter Maydell wrote:
> Hi; the project is out of gitlab CI pipeline minutes again.
> In the absence of any other proposals, no more pull request
> merges will happen til 1st March...

Is there a way of sponsoring more minutes, could people provide
runner resources to help?

-- 
Ben Dooks, b...@fluff.org, http://www.fluff.org/ben/

Large Hadron Colada: A large Pina Colada that makes the universe disappear.




Re: out of CI pipeline minutes again

2023-02-23 Thread Daniel P. Berrangé
On Thu, Feb 23, 2023 at 07:15:47AM -0700, Warner Losh wrote:
> On Thu, Feb 23, 2023, 6:48 AM Thomas Huth  wrote:
> 
> > On 23/02/2023 13.56, Peter Maydell wrote:
> > > Hi; the project is out of gitlab CI pipeline minutes again.
> > > In the absence of any other proposals, no more pull request
> > > merges will happen til 1st March...
> >
> > I'd like to propose again to send a link along with the pull request that
> > shows that the shared runners are all green in the fork of the requester.
> > You'd only need to check the custom runners in that case, which hopefully
> > still work fine without CI minutes?
> >
> > It's definitely more cumbersome, but maybe better than queuing dozens of
> > pull requests right in front of the soft freeze?
> >
> 
> Yea. I'm just getting done with my pull request and it's really
> demotivating to be done early and miss the boat...

Send your pull request anyway, so it is in the queue to be handled.

With regards,
Daniel
-- 
|: https://berrange.com -o- https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o- https://fstop138.berrange.com :|
|: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|




Re: out of CI pipeline minutes again

2023-02-23 Thread Warner Losh
On Thu, Feb 23, 2023, 6:48 AM Thomas Huth  wrote:

> On 23/02/2023 13.56, Peter Maydell wrote:
> > Hi; the project is out of gitlab CI pipeline minutes again.
> > In the absence of any other proposals, no more pull request
> > merges will happen til 1st March...
>
> I'd like to propose again to send a link along with the pull request that
> shows that the shared runners are all green in the fork of the requester.
> You'd only need to check the custom runners in that case, which hopefully
> still work fine without CI minutes?
>
> It's definitely more cumbersome, but maybe better than queuing dozens of
> pull requests right in front of the soft freeze?
>

Yea. I'm just getting done with my pull request and it's really
demotivating to be done early and miss the boat...

I'm happy to do this because it's what I do anyway before sending a pull...

Warner



Re: out of CI pipeline minutes again

2023-02-23 Thread Daniel P. Berrangé
On Thu, Feb 23, 2023 at 02:46:40PM +0100, Thomas Huth wrote:
> On 23/02/2023 13.56, Peter Maydell wrote:
> > Hi; the project is out of gitlab CI pipeline minutes again.
> > In the absence of any other proposals, no more pull request
> > merges will happen til 1st March...
> 
> I'd like to propose again to send a link along with the pull request that
> shows that the shared runners are all green in the fork of the requester.
> You'd only need to check the custom runners in that case, which hopefully
> still work fine without CI minutes?

The maintainer's fork will almost certainly not be against current
HEAD though. So test results from them will not be equivalent to
the tests that Peter normally does on staging, which reflects the
result of merging current HEAD + the pull request.

Sometimes that won't matter, but especially near freeze when we have
a high volume of pull requests, I think that's an important difference
to reduce risk of regressions.

> It's definitely more cumbersome, but maybe better than queuing dozens of
> pull requests right in front of the soft freeze?

With regards,
Daniel
-- 
|: https://berrange.com -o- https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o- https://fstop138.berrange.com :|
|: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|




Re: out of CI pipeline minutes again

2023-02-23 Thread Thomas Huth

On 23/02/2023 13.56, Peter Maydell wrote:

> Hi; the project is out of gitlab CI pipeline minutes again.
> In the absence of any other proposals, no more pull request
> merges will happen til 1st March...


I'd like to propose again to send a link along with the pull request that 
shows that the shared runners are all green in the fork of the requester. 
You'd only need to check the custom runners in that case, which hopefully 
still work fine without CI minutes?


It's definitely more cumbersome, but maybe better than queuing dozens of 
pull requests right in front of the soft freeze?


 Thomas