I considered building "vtest" just once and delivering it to all jobs as an
artifact. That would save some electricity, and GitHub sometimes throws 429
errors when we download "vtest" from too many jobs in parallel.
However, it would not speed anything up, so I postponed that idea (something
like this:
https://docs.github.com/en/actions/using-workflows/storing-workflow-data-as-artifacts
)
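
The postponed artifact approach would look roughly like the sketch below. This is only an illustration, not the actual workflow: the job names, the VTest build step, and the artifact name are all assumptions.

```yaml
jobs:
  # Build "vtest" once, then hand the binary to every test job as an artifact,
  # instead of each matrix row cloning and building it in parallel.
  build-vtest:
    runs-on: ubuntu-latest
    steps:
      - run: |
          git clone https://github.com/vtest/VTest.git
          make -C VTest          # assumed to produce VTest/vtest
      - uses: actions/upload-artifact@v3
        with:
          name: vtest-binary     # hypothetical artifact name
          path: VTest/vtest

  test:
    needs: build-vtest
    runs-on: ubuntu-latest
    steps:
      - uses: actions/download-artifact@v3
        with:
          name: vtest-binary
```

Note that the download step itself takes time in every job, which is why this does not speed up the matrix overall.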


I'm fine with swapping "vtest" <--> "haproxy" order.

Also, I do not think the current patch is ugly; it is acceptable to me (if
we agree it is worth saving 8 seconds). I'm afraid the current patch does
require a fix, though, because GitHub uses the cache in an exclusive way,
i.e. you need a unique cache key per job, and the current cache key is not
job-dependent (but the rest looks fine).
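
One way to make the key job-dependent is to fold the matrix entry into it. A minimal sketch; `matrix.name` and the `VTEST_COMMIT` variable are assumptions, not names from the actual patch:

```yaml
    - uses: actions/cache@v2
      with:
        path: ./vtest
        # Include the matrix entry so each job in the matrix gets its own
        # cache slot; VTEST_COMMIT would hold the commit ID derived earlier.
        key: vtest-${{ matrix.name }}-${{ env.VTEST_COMMIT }}
```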

Tue, 8 Mar 2022 at 20:06, William Lallemand <wlallem...@haproxy.com>:

> Hello,
>
> The attached patch implements a somewhat ugly way to cache the VTest
> binary: basically, it gets the commit ID by curling the master.patch at
> the GitHub URL.
>
> It saves ~8s per matrix row, which is around 160s in total. I know there
> is a small window where the curl and the git clone might not see the same
> ID, but it will be rebuilt anyway on the next build, so that's fine in my
> opinion.
>
> We could probably use the same approach to cache quictls or anything
> that uses a git repository.
>
> Also, I'm wondering if we could cache the HAProxy build too. You might
> think that's weird, but in fact it would help relaunch the tests when one
> is failing, without rebuilding the whole thing.
>
> Let me know if we can improve the attached patch, otherwise I'll merge
> it.
>
> Regards,
>
> --
> William Lallemand
>
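
The curl-based key derivation described in the quoted mail could be sketched like this. The header line below is a simulated example of what GitHub's `master.patch` returns (git's `format-patch` output starts with `From <sha> Mon Sep 17 00:00:00 2001`); in the real workflow the input would come from `curl`, and the SHA shown is made up:

```shell
# In CI this would be:
#   curl -s https://github.com/vtest/VTest/commit/master.patch | head -n 1
# Here we simulate that first line to keep the sketch self-contained.
header='From a1b2c3d4e5f6a7b8c9d0e1f2a3b4c5d6e7f8a9b0 Mon Sep 17 00:00:00 2001'

# The second field of the "From" line is the tip commit SHA, which makes a
# natural cache key: it changes exactly when VTest changes.
sha=$(printf '%s\n' "$header" | awk '{print $2}')
echo "cache-key=vtest-$sha"
# -> cache-key=vtest-a1b2c3d4e5f6a7b8c9d0e1f2a3b4c5d6e7f8a9b0
```

The race William mentions is visible here: the SHA comes from a separate HTTP request, so a push landing between the curl and the git clone can make them disagree, which only costs one cold cache.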
