Yep, pushing a single 'nightly' tag also works as expected (see the
attached image). Can we settle on that solution then?

Just to summarise:

1) The nightly CRON job, once it succeeds, will update the nightly tags on
the latest master/v1-10-test. For clarity, those are the tags:

   - nightly-master
   - nightly-v1-10-test

2) DockerHub will be configured to rebuild the images on pushes of tags
matching the `nightly-.*` regexp. It will "refresh" all the images - CI,
production, production-build (we have three of them now) - for each
supported Python version.

3) No need for any secrets. The temporary GitHub Token available in the CI
will let us move the tags to the latest master/v1-10-test. This in turn
will trigger the DockerHub builds (a rough sketch of this step follows the
list).

4) Tagging will only happen if the CRON build is successful - this will
make sure that whenever we detect a problem early, the images in
DockerHub are not affected.
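
For illustration, here is a rough sketch of how that tag-moving step could
look (the branch-to-tag mapping and the use of GITHUB_TOKEN over HTTPS are
just assumptions for the sketch, not the final implementation):

    # Sketch only - meant to run at the end of a successful nightly CRON build,
    # with GITHUB_TOKEN provided by the workflow and the repo already checked out.
    import os
    import subprocess

    NIGHTLY_TAGS = {
        "master": "nightly-master",
        "v1-10-test": "nightly-v1-10-test",
    }

    def move_nightly_tag(branch: str) -> None:
        tag = NIGHTLY_TAGS[branch]
        token = os.environ["GITHUB_TOKEN"]
        remote = f"https://x-access-token:{token}@github.com/apache/airflow.git"
        # Force-move the tag to the tip of the branch and push it; DockerHub
        # then rebuilds everything matching the `nightly-.*` rule.
        subprocess.run(["git", "fetch", "origin", branch], check=True)
        subprocess.run(["git", "tag", "-f", tag, f"origin/{branch}"], check=True)
        subprocess.run(["git", "push", "-f", remote, tag], check=True)

    if __name__ == "__main__":
        move_nightly_tag(os.environ.get("TARGET_BRANCH", "master"))

This keeps exactly one nightly tag per branch, so nothing piles up in the repo.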


Can we agree on that proposal? I would also love to hear what other
committers think about it  :).


J.



On Tue, Apr 21, 2020 at 4:15 PM Jarek Potiuk <[email protected]>
wrote:

> I am going to check the nightly tag now. If the nightly tag works (quite
> likely) - would that still be a problem, Ash/Daniel? Do you still think that
> the tag-triggered solution is worse in this case than the URL-triggered one?
>
> J.
>
>
> On Tue, Apr 21, 2020 at 4:01 PM Daniel Imberman <[email protected]>
> wrote:
>
>> Yeah, I'm not worried about DDoS as long as the URL is stored in a
>> secret/doesn't show up in the Github Actions UI.
>>
>>
>> On Tue, Apr 21, 2020 at 6:29 AM Ash Berlin-Taylor <[email protected]> wrote:
>>
>> > I'm still not quite sure what problem we are solving here either...?
>> > What is broken with the current/already merged solution?
>> >
>> > From a philosophical view I don't like deleting tags, and this feels
>> > like a bit of a hack to work around limitations in other systems.
>> > (Welcome to being a developer I guess.) What you have proposed is better
>> > than having the tags build up, certainly, but I'm still not wild about
>> > it. (And to check: we can't just re-push a single "nightly" tag, as Docker
>> > Hub will not rebuild when a tag changes? Have we confirmed this?)
>> >
>> > I've read the discussion you linked to, but the only thing I see is this
>> > comment:
>> > https://github.com/apache/airflow/pull/8400#issuecomment-614796124
>> >
>> > > But is it safe to store such URL somewhere? Is it something that is
>> > > sustainable long term (who will take care that it is actually still
>> > > working :)) .... Who will watch the watcher. ?
>> >
>> > Yes, if we store that build URL in a secure secret, for instance using
>> > the encryption approach suggested here
>> > https://help.github.com/en/actions/configuring-and-managing-workflows/creating-and-storing-encrypted-secrets#limits-for-secrets
>> > we can get Apache Infra to add a single secret, and then we can add/change
>> > values easily in the future.
>> >
>> > There is a lot of precedent in Infra tickets for creating a secret for
>> > Github Actions, for example:
>> > https://issues.apache.org/jira/browse/INFRA-19602?jql=text%20~%20%22github%20actions%20secrets%22
>> >
>> > In the past I've used https://github.com/voxpupuli/hiera-eyaml even
>> > outside of puppet as it only encrypts the values, not the whole file,
>> > which makes it a bit easier to see what setting is changed, even if the
>> > setting is not visible in the diff. So what I'd suggest is we ask Infra
>> > to create a random GPG key, put the private key in a Secret in Github
>> > and then provide us with the public key. I'm happy to set this up if
>> > it's the route we want to go down.
>> >
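>> > For illustration, a rough sketch of that flow in Python (the key ID, file
>> > names and the GPG_PRIVATE_KEY secret name are just placeholders, not the
>> > actual setup):
>> >
>> >     # Sketch only - assumes the Infra-created key has no passphrase and that
>> >     # CI exposes the private key as a hypothetical GPG_PRIVATE_KEY secret.
>> >     import os
>> >     import subprocess
>> >
>> >     KEY_ID = "[email protected]"  # hypothetical recipient key
>> >
>> >     def encrypt_value(plaintext: str, out_file: str) -> None:
>> >         # Committers encrypt with the public key; the result is safe to commit.
>> >         subprocess.run(
>> >             ["gpg", "--armor", "--encrypt", "--recipient", KEY_ID,
>> >              "--output", out_file],
>> >             input=plaintext.encode(), check=True)
>> >
>> >     def decrypt_value(in_file: str) -> str:
>> >         # CI imports the private key from the Secret, then decrypts the value.
>> >         subprocess.run(["gpg", "--batch", "--import"],
>> >                        input=os.environ["GPG_PRIVATE_KEY"].encode(), check=True)
>> >         result = subprocess.run(["gpg", "--batch", "--decrypt", in_file],
>> >                                 capture_output=True, check=True)
>> >         return result.stdout.decode()
>> >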
>> > If it's a nightly Github Action, we'd see CRON failures as we did
>> > with Travis, no?
>> >
>> > -a
>> >
>> >
>> > On Apr 21 2020, at 12:17 pm, Jarek Potiuk <[email protected]>
>> > wrote:
>> >
>> > > On Tue, Apr 21, 2020 at 12:05 PM Ash Berlin-Taylor <[email protected]>
>> > wrote:
>> > >
>> > >> I've just looked in Docker settings for its Automated builds, and it is
>> > >> possible to set up a URL that we can post to that will then trigger a
>> > >> daily build.
>> > >>
>> > >>
>> > >> https://hub.docker.com/repository/registry-1.docker.io/apache/airflow/builds/05570a90-f8bf-4803-b935-f93c455ab5bb
>> > >> was me testing it out (needs auth, most people won't be able to see
>> > >> that)
>> > >>
>> > > Yes, I know this option. This problem (regular builds), and possibly
>> > > triggering the builds via some kind of CRON job, was already discussed in
>> > > detail with Daniel in
>> > > https://github.com/apache/airflow/pull/8400#issuecomment-614783967 - that
>> > > was the PR entitled "Less frequent DockerHub Builds" which we merged already
>> > > (but I am not particularly happy with this approach). Please take a look
>> > > there, Ash - we discussed all the options we saw at the time (including URL
>> > > triggering).
>> > >
>> > >
>> > >> So we can set up a Travis job (say, since we can put encrypted info in
>> > >> there; I don't think we can put secrets in our Github Actions as we
>> > >> aren't admins on the repo) that would make a POST to this special URL
>> > >> once a day, causing DockerHub to build for us.
>> > >>
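>> > >> For illustration, a minimal sketch of that daily POST, assuming the
>> > >> trigger URL is provided to the job as a hypothetical TRIGGER_URL secret:
>> > >>
>> > >>     # Sketch only - TRIGGER_URL is a placeholder for the secret holding
>> > >>     # the DockerHub automated-build trigger endpoint.
>> > >>     import os
>> > >>     import requests
>> > >>
>> > >>     def trigger_dockerhub_build() -> None:
>> > >>         url = os.environ["TRIGGER_URL"]
>> > >>         # An empty POST to the trigger endpoint requests a rebuild.
>> > >>         response = requests.post(url, timeout=30)
>> > >>         response.raise_for_status()
>> > >>
>> > >>     if __name__ == "__main__":
>> > >>         trigger_dockerhub_build()
>> > >>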
>> > >
>> > > I believe a big problem with an external URL is that it might be used to
>> > > DDoS our builds. And we cannot (for now) manage secrets in our Github
>> > > Actions. I opened an INFRA ticket and Gavin assigned it to himself, so it
>> > > will likely be answered soon and maybe we will have a proposal from INFRA
>> > > soon: https://issues.apache.org/jira/projects/INFRA/issues/INFRA-20124.
>> > > If we had this possibility, a URL triggered by a CRON Github Action would
>> > > be an option. We are waiting for INFRA to help with that. And I think we
>> > > want to move off Travis eventually, and I do not want to add another "CRON"
>> > > service just for that - it should be available to all committers to
>> > > modify/fix/change, and we do not want to add an additional
>> > > service/credentials/hidden URL secret mechanism. I think we definitely do
>> > > not want to keep both GA and Travis at the same time - keeping Travis
>> > > running would complicate our toolset.
>> > >
>> > >> Would that get us the behaviour we need without polluting our git tags?
>> > >>
>> > >
>> > > I think I have a better solution :) See below.
>> > >
>> > >> -ash
>> > >>
>> > >> On Apr 21 2020, at 10:59 am, Ash Berlin-Taylor <[email protected]> wrote:
>> > >>
>> > >> > What is the goal in having daily-master-ci-2020-04-21 etc docker image
>> > >> > tags? When would we want to use anything other than the "current latest
>> > >> > CI master" image?
>> > >>
>> > >
>> > > Agree. It does clutter the namespace. And some projects are ok with that.
>> > > If we do not think it might be useful, we can even implement a retention
>> > > policy and keep only the 2-3 latest tags (or even just the latest one). I
>> > > think this might be a very good solution - every night when the master CRON
>> > > build succeeds, we delete the previous "daily-master-ci-*" tag and create a
>> > > new one with today's date. That will give us what we want, it will not
>> > > clutter the namespace, and additionally we will immediately see when the
>> > > last daily build succeeded. The builds in DockerHub can be triggered by a
>> > > regular expression on the tags, so this will work.
>> > >
>> > > I think in this form it should address all your concerns, Ash (no clutter,
>> > > full automation) and mine (no extra services to manage), and provides a
>> > > robust solution.
>> > >
>> > > What do you think? Ash, any other concerns? Others?
>> > >
>> > > J.
>> > >
>> >
>>
>
>
> --
>
> Jarek Potiuk
> Polidea <https://www.polidea.com/> | Principal Software Engineer
>
> M: +48 660 796 129 <+48660796129>
> [image: Polidea] <https://www.polidea.com/>
>
>

-- 

Jarek Potiuk
Polidea <https://www.polidea.com/> | Principal Software Engineer

M: +48 660 796 129 <+48660796129>
[image: Polidea] <https://www.polidea.com/>
