We are from Noida but would be open to participating in Bangalore's meetup.
Thanks,
Raman Gupta
On 2019/08/18 04:40:27, Sumit Maheshwari wrote:
> We are in Bangalore. And most of the folks who showed interest are also
> from Bangalore.
>
> On Sun, Aug 18, 2019 at 12:16 AM Kaxil Naik wrote:
>
>
Hi All,
Are there any Airflow meetups in India? We are based out of India and are using
Apache Airflow as an orchestration engine to author and manage thousands of
multi-step workflows.
We would be interested in an Airflow meetup in India.
Thanks,
Raman Gupta
We have a use case where we want to preserve the XCom value across operator
retries. Is there a way to do so? Currently it seems that XCom values are reset
on operator restart.
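Since XComs are tied to a task instance's current try and get cleared on re-run, one workaround is to stash the value in a store that is keyed by the logical run rather than by the individual attempt (for example an Airflow Variable). A minimal pure-Python sketch of that pattern, with a plain dict standing in for the metadata store (all names here are illustrative, not Airflow APIs):

```python
# Sketch: persist a value across task retries by writing it to a store
# keyed by the logical run, not by the individual try number.
# `store` is a stand-in for a real backend such as Airflow's Variable table.
store = {}

def run_key(dag_id, task_id, execution_date):
    """Key that stays stable across retries of the same task instance."""
    return f"{dag_id}.{task_id}.{execution_date}"

def save_progress(dag_id, task_id, execution_date, value):
    store[run_key(dag_id, task_id, execution_date)] = value

def load_progress(dag_id, task_id, execution_date, default=None):
    return store.get(run_key(dag_id, task_id, execution_date), default)

# The first try saves a checkpoint; a retry of the same run reads it back,
# because the key does not include the try number.
save_progress("my_dag", "my_task", "2019-08-18", {"rows_done": 500})
resumed = load_progress("my_dag", "my_task", "2019-08-18")
```

The key design choice is simply what goes into the key: anything per-try (like try_number) is excluded, so a retry resolves to the same slot.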
On 2019/06/26 13:27:58, Emmanuel Brard wrote:
> Hey,
>
> That's what is in airflow code, yes.
>
> Cheers,
> E
>
>
Yup, we are using Docker.
Thanks,
Raman Gupta
On 2019/06/19 13:25:04, Teresa Martyny wrote:
> Hi Raman, are you using Docker?
> We manage all of that with Docker and Nomad.
>
> On Tue, Jun 18, 2019, 10:54 PM ramandu...@gmail.com
> wrote:
>
> > Thanks David & Teresa,
> > We are thinking of si
Thanks David & Teresa,
We are thinking of a similar approach, where we would have a dedicated test
env to test the Airflow upgrades. We have to preserve the metadata, so we will
be upgrading the Airflow version using the Airflow cmd. The following steps
would be done:
-> Set up a MySQL slave to Airflo
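For reference, a rehearsal of this kind of upgrade in the test env might look roughly like the commands below. All paths, credentials, and the target version are placeholders, and the exact subcommands differ by release (`airflow upgradedb` in the 1.10.x line discussed here, `airflow db upgrade` in later releases), so this is a command-fragment sketch rather than a recipe:

```shell
# Back up the metadata DB before touching anything (placeholder credentials).
mysqldump -u airflow -p airflow_db > airflow_db_backup.sql

# Upgrade the package in the dedicated test environment first.
pip install "apache-airflow==1.10.4"   # target version is a placeholder

# Apply schema migrations to the (copied) metadata DB.
airflow upgradedb                      # "airflow db upgrade" on newer releases

# Smoke-test before switching production traffic.
airflow list_dags                      # "airflow dags list" on newer releases
```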
Hi All,
Are there any recommended steps/processes to upgrade Airflow in production? We
have an Airflow setup in prod which is being actively used by different users.
So ideally we would want to have a seamless upgrade, but it seems Airflow
upgrades are not backward compatible and might break few thi
Hi All,
Currently DAG tasks are assigned to a pool at DAG creation time, so it is a
kind of static pool assignment, which might lead to unused resources if there
are multiple pools configured in Airflow.
So does it make sense to have support for pool allocation pol
Hi All,
We are observing intermittently that tasks get stuck in the queued state and
never get executed by Airflow.
On debugging it, we found that one of the queued dependencies was not met, due
to which the task did not move from the queued to the running state. So the
task remained in the queued state.
(are_dependencies
On 2019/05/23 11:05:47, ramandu...@gmail.com wrote:
> Hi All,
> Is there a way to skip the task retries during DAG execution.
> We have a use case where we want to fail the dag run and skip the leftover
> retries. Currently we throw the AirflowFailedException and update the retries
> count i
Hi All,
Is there a way to skip the task retries during DAG execution?
We have a use case where we want to fail the DAG run and skip the leftover
retries. Currently we throw the AirflowFailedException and update the retries
count in the TI table.
Is there any other, better way to achieve the same?
Than
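In the 1.10.x line current at the time of this thread there is a cleaner route than editing the TI table: raising `airflow.exceptions.AirflowFailException` (added around 1.10.4; worth verifying on your version) fails the task immediately without consuming the remaining retries. The control-flow idea in plain Python, with `FatalError` playing the role of that exception:

```python
# Sketch of "fail fast, skip the leftover retries" control flow.
# FatalError plays the role of AirflowFailException (assumption based on
# 1.10.4-era Airflow): the runner retries transient errors but re-raises
# fatal ones immediately.

class RetryableError(Exception):
    """Transient failure: worth retrying."""

class FatalError(Exception):
    """Unrecoverable failure: do not retry."""

def run_with_retries(task, max_retries=3):
    attempts = 0
    while True:
        attempts += 1
        try:
            return task(attempts), attempts
        except FatalError:
            raise  # skip the leftover retries entirely
        except RetryableError:
            if attempts > max_retries:
                raise  # retries exhausted

def flaky(attempt):
    """Fails twice, then succeeds, like a transient upstream error."""
    if attempt < 3:
        raise RetryableError("try again")
    return "ok"

result, tries = run_with_retries(flaky)  # succeeds on the 3rd attempt
```

The point is that the decision to stop retrying lives in the exception type the task raises, not in mutating scheduler state after the fact.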
Hey All,
Currently it seems that the Airflow scheduler in K8s executor mode is required
to run inside the K8s cluster. We are working on a use case where the Airflow
scheduler might run outside of the K8s cluster. There is a boolean flag
"in_cluster" exposed via airflow.cfg, but there does not seem to be a way t
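For reference, the flag lives in the `[kubernetes]` section of airflow.cfg; running the scheduler outside the cluster also needs a way to point the executor at a kubeconfig. An illustrative fragment, with the caveat that the `config_file` option exists in later Airflow releases and may be exactly the gap this thread is describing on the version in question (path is a placeholder):

```ini
# Illustrative airflow.cfg fragment; option availability varies by version.
[kubernetes]
# Tell the KubernetesExecutor it is NOT running inside the cluster...
in_cluster = False
# ...and where to find credentials for the remote cluster.
# (config_file is present in later releases; on older ones this is
# the missing piece being asked about.)
config_file = /home/airflow/.kube/config
```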
Hi All,
In our workflows we trigger big data jobs which run from a few hours to a few
days. Currently our Airflow operator submits the job and keeps polling its
status. Depending upon its status, the next task in the workflow is triggered
by the Airflow scheduler.
So currently the operator is not doing any u
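One way to avoid tying up a worker slot for hours of polling is Airflow's reschedule sensor mode (`BaseSensorOperator` with `mode='reschedule'`, available since 1.10.2), which frees the slot between pokes. Since the real sensor needs a running Airflow, here is a pure-Python sketch of the underlying poke loop with exponential backoff; `check_status` is a placeholder for the big-data job's status API:

```python
import time

# Sketch of a poll-with-backoff loop like the operator described above.
# check_status is a placeholder for the external job's status endpoint.

def wait_for_job(check_status, base_delay=1.0, max_delay=60.0, sleep=time.sleep):
    """Poll until check_status() returns a terminal state, doubling the
    delay between polls so long-running jobs are polled less aggressively."""
    delay = base_delay
    polls = 0
    while True:
        polls += 1
        state = check_status()
        if state in ("SUCCEEDED", "FAILED"):
            return state, polls
        sleep(delay)
        delay = min(delay * 2, max_delay)

# Fake job that finishes on the 4th poll; sleep is stubbed out for the demo.
states = iter(["RUNNING", "RUNNING", "RUNNING", "SUCCEEDED"])
state, polls = wait_for_job(lambda: next(states), sleep=lambda d: None)
```

Injecting `sleep` as a parameter keeps the loop testable; in a real sensor, the scheduler itself provides the gap between pokes instead of a blocking sleep.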