Hi,

On 02.03.21 15:14, Allen George wrote:
> Hi -
> 
> Re: the blog post on the paid subscription for Travis - is it still valid
> today? If so, that's great to hear.
> 
> Re: reinstalling deps from scratch. You're right Mario - that does happen.
> It seems like the guidance is to have the first stage build a temporary
> docker image that has all the deps, and have the following stages pull that
> image down and use it for their tasks. Travis doesn't have a docker cache,
> however, so we'd need something like Docker Hub. Does the ASF have a Docker
> Hub subscription? Has anyone looked into doing this in the past?

Yep, this would work, or there could be a dedicated branch that builds the
docker images. I personally use a dedicated branch that I trigger only about
once a month. The benefit is that the "current latest" docker images keep
working until new docker images (from the branch) become available. So if an
upstream remote is temporarily down for a few weeks, it will only break the
dedicated docker build and none of our own pipelines.
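
For illustration, a minimal sketch of what that could look like in
.travis.yml - everything named here (the thriftci/build-deps image, the
docker-refresh branch, the DOCKER_USER/DOCKER_PASS variables, and the
paths) is a placeholder, not something that exists today:

  jobs:
    include:
      # Dedicated branch: rebuild the dependency image and push it.
      - stage: refresh image
        if: branch = docker-refresh
        services:
          - docker
        script:
          - docker build -t thriftci/build-deps:latest build/docker/ubuntu-bionic
          - echo "$DOCKER_PASS" | docker login -u "$DOCKER_USER" --password-stdin
          - docker push thriftci/build-deps:latest
      # Everything else: pull the current latest image and build inside it,
      # so no dependencies are ever installed from scratch.
      - stage: test
        if: branch != docker-refresh
        services:
          - docker
        script:
          - docker pull thriftci/build-deps:latest
          - docker run -v "$PWD:/thrift" -w /thrift thriftci/build-deps:latest build/docker/scripts/autotools.sh

The only open question is where the pushed image should live, which is
exactly the Docker Hub question above.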

My own experience with this concept is pretty good, as long as someone
actually updates the dependencies every now and then.

I tried to implement this for Thrift but stopped because I did not know
where to put the images :-(

Cheers,

    Mario



> Thanks,
> Allen
> 
> On Tue, Mar 2, 2021 at 3:38 AM Duru Can Celasun <dcela...@apache.org> wrote:
> 
>> Keep in mind that the ASF has a paid subscription [1] to Travis, so we are
>> not limited to the open source plan.
>>
>> [1] https://blogs.apache.org/infra/entry/apache_gains_additional_travis_ci
>>
>> On Tue, 2 Mar 2021, at 08:28, Mario Emmenlauer wrote:
>>>
>>> Hi,
>>>
>>> On 02.03.21 05:28, Allen George wrote:
>>>> Hi -
>>>>
>>>> Really sorry if I missed the conversation about this, but it seems like
>>>> Travis open source builds are being drastically reduced. I only realized
>>>> this when looking at the ominous warning on the Travis build page for
>>>> Thrift. Counting up the minutes for a single push indicates that we use
>>>> 500 minutes per PR (!). This is a serious problem, because as far as I
>>>> can tell,
>>>
>>> I'm not sure how much this is related, but I'm under the impression that
>>> currently every build starts from a vanilla environment and installs all
>>> dependencies from scratch. This spends significant time on downloads and
>>> installations, and (what's almost worse) it often fails when upstream
>>> dependencies are temporarily unavailable or changed.
>>>
>>> It could be much better to have a persistent environment, for example by
>>> preserving the pre-installed docker containers?
>>>
>>> All the best,
>>>
>>>     Mario Emmenlauer
>>>
>>
> 



Best regards,

    Mario Emmenlauer


--
BioDataAnalysis GmbH, Mario Emmenlauer      Tel. Buero: +49-89-74677203
Balanstr. 43                   mailto: memmenlauer * biodataanalysis.de
D-81669 München                          http://www.biodataanalysis.de/
