+1 for merge-blocking hooks. It would be great to have the safety of knowing
that any commit I revert to will still pass tests (for rebase testing, etc.).
On Mon, Jun 11, 2018 at 10:23 PM Alex Tronchin-James wrote:
Could we adopt some sort of merge-blocking hook that prohibits merge of PRs
failing unit tests? My team has such an approach at work and it reduces the
volume of breakage quite a bit. The only time we experience problems now is
where our unit test coverage is poor, but we improve the coverage
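A merge gate like the one described above can be sketched as a small script that a CI job or pre-merge hook runs before allowing the merge; everything here (the script itself, the default `pytest` command) is an illustrative assumption, not a description of any existing Airflow tooling:

```python
# Hypothetical pre-merge gate: run the unit test suite and block the
# merge (non-zero exit status) if anything fails.
import subprocess
import sys

def tests_pass(test_cmd=("python", "-m", "pytest", "-q")):
    """Return True iff the given test command exits cleanly."""
    result = subprocess.run(test_cmd)
    return result.returncode == 0

if __name__ == "__main__":
    if not tests_pass():
        print("Unit tests failed; blocking merge.", file=sys.stderr)
        sys.exit(1)
```

In practice the same effect is usually achieved by marking the CI test job as a required status check on the master branch, so the hosting platform refuses the merge button until it passes.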
Hi folks,
The master branch has been broken for a couple of days already, but that
hasn't stopped the project from merging pull requests. As time passes, it
gets harder to identify which change caused the breakage. And of course,
fixing it might cause conflicts with the changes introduced by the
Got it! I don’t think it’s this last case, but I’ll keep an eye open for it
anyway.
Really, thanks again, I appreciate the help! I’ll let you know what I find if
it seems it may be of some use to you.
Stéphane
> On Jun 11, 2018, at 3:31 PM, Maxime Beauchemin wrote:
One more thing is if one of your workers has a missing dependency required
for a specific DAG. For example, you read configuration from Zookeeper in
the DAG file, but one worker is missing the Zookeeper client Python lib
while the scheduler has it. You can imagine that the scheduler will
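One way to make that failure mode obvious is to guard the optional import in the DAG file so the error names the host that is missing the library. This is a sketch, not Airflow machinery; `load_optional_dependency` is a hypothetical helper:

```python
import importlib
import socket

def load_optional_dependency(module_name):
    """Import a module a DAG file needs, raising an actionable error
    that names the host (scheduler or worker) where it is missing."""
    try:
        return importlib.import_module(module_name)
    except ImportError as exc:
        raise ImportError(
            f"{module_name!r} is not installed on {socket.gethostname()}; "
            "install it on the scheduler *and* every worker image."
        ) from exc
```

With this, a worker that lacks (say) the Zookeeper client fails the DAG import with a log line identifying itself, instead of silently diverging from the scheduler.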
Max,
Thank you for the quick response, that is very helpful and great material for
my investigations!
Thanks again,
Stéphane
> On Jun 11, 2018, at 3:11 PM, Maxime Beauchemin wrote:
DagBag import timeouts happen when people do more than just "configuration
as code" in their module scope (say doing actual compute in module scope,
which is a no-no). They may also happen if you read things from flimsy
external systems that may introduce delays. Say you read pipeline
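The module-scope anti-pattern described above can be illustrated with a minimal sketch; `fetch_pipeline_config` is a made-up stand-in for any slow external read:

```python
import time

def fetch_pipeline_config():
    """Stand-in for a slow external read (Zookeeper, a DB, an API...)."""
    time.sleep(0.01)
    return {"retries": 2}

# BAD: runs at import time, on the scheduler and every worker, every
# time the DagBag parses this file -- slow calls here can trip the
# DAG file import timeout:
#
#   CONFIG = fetch_pipeline_config()

# GOOD: the slow call is deferred into the task callable, so module
# scope stays pure "configuration as code" and imports stay fast:
def run_task():
    config = fetch_pipeline_config()
    return config["retries"]
```

The rule of thumb: module scope should only define the DAG and its tasks; anything that computes or talks to an external system belongs inside the callables that run at execution time.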
Hi there,
We’re using Airflow in our startup and it’s been great in many ways, thanks for
the work you guys are doing!
Unfortunately, we’re hitting a bunch of issues with ops timing out and DAGs
failing for unclear reasons, with either no logs or the following error:
We are using AWS ECS to deploy airflow and we rely on it to have some kind of
high availability and scaling workers.
We have defined 3 ECS services: scheduler / webserver / worker.
The Scheduler and Webserver each run in a single container.
The Worker service can scale to any number of