I’m still somewhat hesitant about this, as it could allow regressions to slip
through. However, as long as we keep doing the daily build, and with our
soon-to-be-created prerelease load testing, I think we should be okay.

On Fri, Oct 18, 2019 at 10:30 AM, Jarek Potiuk <jarek.pot...@polidea.com> wrote:
It seems that our tests just got way smarter :).

I just implemented the "smartness" with
https://github.com/apache/airflow/pull/6321, and at the same time I am
trying to work around the Kubernetes problem we have :).
The doc changes are short enough that there is no need to optimise those
further. Doc-only test runs should execute much, much faster now :).
For details see https://issues.apache.org/jira/browse/AIRFLOW-5649
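
If anyone is curious, the check boils down to looking at which files the
change touches. A minimal sketch of the idea (hypothetical code, not what
the PR actually does; the extension list is an assumption):

    # Hypothetical sketch of docs-only detection (not the code from the PR).
    import subprocess

    DOC_SUFFIXES = (".rst", ".md")  # assumed set of doc extensions

    def changed_files(base="master"):
        # Files changed on this branch relative to the base branch.
        out = subprocess.run(
            ["git", "diff", "--name-only", base + "...HEAD"],
            capture_output=True, text=True, check=True,
        ).stdout
        return [f for f in out.splitlines() if f]

    def docs_only(files):
        return bool(files) and all(f.endswith(DOC_SUFFIXES) for f in files)

    if docs_only(changed_files()):
        print("Docs-only change: run static checks and the doc build only")
    else:
        print("Run the full test suite")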

J.

On Fri, Aug 23, 2019 at 4:09 PM James Meickle
<jmeic...@quantopian.com.invalid> wrote:

> GitHub recently introduced the idea of "Draft" PRs:
> https://github.blog/2019-02-14-introducing-draft-pull-requests/
>
> Could we do something similar, either with that system or with something
> else? Run a minimal test set until the PR is marked as "ready for
> testing", and then run the larger suite.
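>
> A quick sketch of how CI could branch on that (the "draft" field on pull
> request objects comes from the GitHub REST API; the repo, PR number, and
> token handling here are just for illustration):
>
>     import json
>     import os
>     import urllib.request
>
>     def is_draft(repo, number, token):
>         # Fetch the PR and read its "draft" flag.
>         req = urllib.request.Request(
>             f"https://api.github.com/repos/{repo}/pulls/{number}",
>             headers={"Authorization": f"token {token}"},
>         )
>         with urllib.request.urlopen(req) as resp:
>             return json.load(resp).get("draft", False)
>
>     if is_draft("apache/airflow", 6321, os.environ["GITHUB_TOKEN"]):
>         print("Draft PR: static checks and related tests only")
>     else:
>         print("Ready for testing: run the full suite")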
>
> On Fri, Aug 23, 2019 at 10:01 AM Kaxil Naik <kaxiln...@gmail.com> wrote:
>
> > Maybe your 4th point covers this, but there are frequent doc-only
> > changes. In that case we should not run the "real tests" but only the
> > static checks: mypy, pylint, flake8, and doc generation.
> >
> > On Fri, Aug 23, 2019 at 2:39 PM Jarek Potiuk <jarek.pot...@polidea.com>
> > wrote:
> >
> > > Hello everyone,
> > >
> > > On top of moving out from Travis (I will resume working on it next
> > > week), I thought about some ways to improve the feedback cycle time we
> > > have with CI (super long now).
> > >
> > > Maybe we should consider being smarter and run tests only when they
> > > are really needed.
> > >
> > > After introducing the pre-commit framework, I noticed that it is rather
> > > smart in selecting which checks to run locally, based on which files
> > > changed (there is an illustration below). It's not perfect of course
> > > (there are edge cases where we change .xml files and the Python checks
> > > stop running, for example), but maybe we can introduce some smartness
> > > into our test execution scripts based on similar principles.
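> > >
> > > For reference, this is the mechanism pre-commit uses: each hook can
> > > declare a "files" regex and only runs when a changed file matches it.
> > > An illustrative excerpt (the hook ids and script path are made up,
> > > this is not our actual config):
> > >
> > >     # .pre-commit-config.yaml (illustrative excerpt)
> > >     repos:
> > >       - repo: local
> > >         hooks:
> > >           - id: mypy
> > >             name: mypy on changed Python files
> > >             entry: mypy
> > >             language: system
> > >             files: \.py$        # only fires when a .py file changed
> > >           - id: doc-checks
> > >             name: doc checks
> > >             entry: ./scripts/check-docs.sh   # hypothetical script
> > >             language: system
> > >             files: \.(py|rst)$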
> > >
> > > First of all, we should always run all tests after a merge to master
> > > (this catches the edge cases missed by partial runs in PRs). Then, in
> > > PRs, we could run partial tests:
> > >
> > > 1) Do not run Python tests if no .py files change.
> > >
> > > 2) Do not run doc generation if neither .py nor .rst files change.
> > >
> > > 3) [This might be controversial / might not catch a lot of problems]
> > > Only run related tests (for the corresponding packages) when .py files
> > > change.
> > >
> > > 4) Only run the "real unit tests" if none of the operators/hooks/sensors
> > > change (we would have to introduce a way to distinguish unit tests,
> > > requiring only basic Airflow + a database, from the integration tests
> > > that run when hooks/sensors/operators change). This could be done using
> > > the slimmer/smaller future production CI image, without the overhead of
> > > starting up all the services.
> > >
> > > 5) Run only static checks + real/related tests for draft/in-progress
> > > PRs, and only trigger the full tests when someone marks the PR as
> > > "Ready to merge" (via a label or a comment on the PR).
> > >
> > > A rough sketch of how rules 1-4 could map changed files to test groups
> > > follows below.
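> > >
> > > To make that concrete, here is a minimal sketch of the selection logic
> > > (the group names and path patterns are made up for illustration; they
> > > are not a proposal for our actual layout):
> > >
> > >     def select_test_groups(changed_files):
> > >         # Map changed files to the CI test groups that should run.
> > >         groups = {"static-checks"}  # always run the static checks
> > >         py_files = [f for f in changed_files if f.endswith(".py")]
> > >         if py_files:
> > >             groups.add("python-tests")  # rule 1
> > >         if py_files or any(f.endswith(".rst") for f in changed_files):
> > >             groups.add("doc-generation")  # rule 2
> > >         # Rule 3 (per-package test mapping) is omitted for brevity.
> > >         integration_dirs = ("airflow/operators/", "airflow/hooks/",
> > >                             "airflow/sensors/")
> > >         if any(f.startswith(integration_dirs) for f in py_files):
> > >             groups.add("integration-tests")  # rule 4
> > >         return groups
> > >
> > >     print(sorted(select_test_groups(["docs/howto.rst"])))
> > >     # prints: ['doc-generation', 'static-checks']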
> > >
> > > Of course, if we can make the no-Travis setup fast enough, running the
> > > full tests in all PRs makes more sense, but maybe some of the options
> > > above are also acceptable.
> > >
> > > Comments?
> > >
> > > J.
> > >
> > > --
> > >
> > > Jarek Potiuk
> > > Polidea <https://www.polidea.com/> | Principal Software Engineer
> > >
> > > M: +48 660 796 129
> > >
> >
>


--

Jarek Potiuk
Polidea <https://www.polidea.com/> | Principal Software Engineer

M: +48 660 796 129
