We have a use case where we want to preserve the XCom value across operator
retries. Is there a way to do so? Currently it seems that XCom values are reset
when the operator restarts.
On 2019/06/26 13:27:58, Emmanuel Brard wrote:
> Hey,
>
> That's what's in the Airflow code, yes.
>
> Cheers,
> E
>
>
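A minimal sketch of one possible workaround (not from this thread), assuming
Airflow 1.10.x: since XCom rows for a task instance are reset before each new
try, the value can be cached in an Airflow Variable instead, which retries
leave untouched. The function expensive_computation and the key format below
are illustrative placeholders, not anything proposed on the list.

# Illustrative sketch: cache the value in an Airflow Variable so it survives
# task retries (XComs for the task instance are reset on retry).
from airflow.models import Variable


def expensive_computation():
    # Hypothetical placeholder for the work whose result we want to keep.
    return "abc123"


def preserve_across_retries(**context):
    ti = context["ti"]
    # Key the Variable per task instance so different runs do not collide.
    key = "cache__{}__{}__{}".format(ti.dag_id, ti.task_id, context["ds"])

    cached = Variable.get(key, default_var=None)
    if cached is not None:
        # A previous try already computed and stored the value; reuse it.
        return cached

    value = expensive_computation()
    Variable.set(key, value)
    return value

The callable would be wired into a PythonOperator with provide_context=True
(1.10.x); its return value is still pushed to XCom for downstream tasks, but a
retry re-reads the cached value from the Variable instead of losing it.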
+1 (non-binding)
On Thu, Jun 27, 2019 at 8:58 PM Gerardo Curiel wrote:
> +1 (non-binding)
>
> On 27 June 2019 at 2:57:47 pm, Jarek Potiuk (jarek.pot...@polidea.com)
> wrote:
>
> Hello Airflow community,
>
> This email calls for a vote on the "Labelling scheme" for the Docker images we
> are going to
+1 (non-binding)
On 27 June 2019 at 2:57:47 pm, Jarek Potiuk (jarek.pot...@polidea.com)
wrote:
Hello Airflow community,
This email calls for a vote on the "Labelling scheme" for the Docker images we
are going to publish at
https://cloud.docker.com/u/apache/repository/docker/apache/airflow. The
vote w
Yeah. I also have a working version of the Cloud Build configuration, and we
can run the tests on Cloud Build if we can get some credits from Google. And
the changes from the upcoming CI image will make it much easier to run
tests on any CI provider. Except for the Kubernetes tests, they are pretty
much CI-agnostic
I think the combinations that you are proposing are sensible for pre-merge
checks.
I am working on a proposal to offload extra combinations to another CI
provider (Azure DevOps specifically seems like a good candidate), either
pre or post merge. Ideally I'd like to run more combinations pre-merge
Agree that we should be thoughtful about others as well: in the latest push
(a few minutes ago) of the upcoming official CI image I implemented the
change we discussed on GitHub where we limit the number of combinations
we test:
You can see it yourself:
https://travis-ci.org/apache/airflow/build
We got this message last year:
> Hello, Airflow PPMC.
> While going through the usage statistics for our Travis CI service, I
> have noticed that the Airflow project is using an abnormally large
> amount of resources, 2600 hours per month or the equivalent of having
> almost 4 machines building airflow
I think we should really involve infra to increase the slot number or maybe
even somehow allocate slots per project.
The problem is that we cannot control what other Apache projects are doing,
so even if we decrease our runtime, it's the other projects that might hold
us in the queue :(
J.
On Thu
I've noticed this at other Apache projects as well; sometimes it takes up
to 7-8 hours. The only thing we can do is reduce the runtime of the jobs
so we take fewer slots :-)
Cheers, Fokko
On Wed, 26 Jun 2019 at 21:59, Jarek Potiuk wrote:
> Yep. That's what I suggested as the reason in the ticket