We attempted to use zipped DAGs in the past and ran into a bunch of issues.

I've stopped trying to get them to work and just use the DockerOperator or
PythonVirtualenvOperator when I need custom dependencies.
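For reference, a minimal sketch of the virtualenv approach, assuming an
Airflow 1.10-era install (the operator's module path differs in later
versions); the DAG id, task id, and callable here are made up for
illustration:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python_operator import PythonVirtualenvOperator


def transform():
    # Imports must live inside the callable: it runs in a freshly
    # built virtualenv, isolated from the scheduler's environment.
    import pandas as pd
    print(pd.__version__)


# "custom_deps_example" is a hypothetical DAG id for this sketch.
with DAG(
    dag_id="custom_deps_example",
    start_date=datetime(2019, 6, 1),
    schedule_interval=None,
) as dag:
    PythonVirtualenvOperator(
        task_id="transform_in_venv",
        python_callable=transform,
        # Dependencies pinned per-task rather than shipped in a zip.
        requirements=["pandas==0.24.2"],
        system_site_packages=False,
    )
```

The trade-off is that each task run pays the cost of building the
virtualenv, which is why the Docker route can be preferable for heavier
dependency sets.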

+1 on supporting DAG serialization or remote fetching; either would work
around a bunch of problems.

On Mon, Jun 10, 2019, 11:19 AM Gabriel Silk <gs...@dropbox.com.invalid>
wrote:

> +1 to a serialization scheme. I'm happy to give early feedback.
>
> On Mon, Jun 10, 2019 at 8:34 AM Dan Davydov <ddavy...@twitter.com.invalid>
> wrote:
>
> > I know the code around this is pretty hacky (if use_zip_file then...
> > instead of an abstraction). I know when it was added it was a bit
> > controversial; I would be +1 on removing it. That being said, I feel the
> > entire DAG parsing process needs to be moved to the client-side (users
> > who write DAGs), with a serialization scheme. I'm working on a proposal
> > for this at the moment, and this would obviate even reading DAGs from
> > the local filesystem.
> >
> > > Having everything zipped up gives us some assurance that when a DAG is
> > > updated, the whole thing is replaced as a single unit.
> >
> > I don't believe unzipping files gives this atomicity guarantee any more
> > than something like rsync --delete.
> >
>
