> To me, I'm always working from a user perspective. My goal is to make
their lives easier, their deployments easier, the product the most
enjoyable for them to use. To me, the best user experience is that they
> should enable bundle versioning and it should just work with few or no
> extra steps.
Personally I dislike things like assert_called_once_with etc. since they are
easy to miss when you scan a test to see what they are trying to check. An
'assert' keyword stands out (it’s always the first word in the line),
especially with syntax highlighting.
I do agree with the proposed pytest style
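To illustrate the point being made, here is a minimal sketch (the function and test names are invented for this example, not taken from the Airflow code base) contrasting the two styles: a mock-method assertion that is easy to miss when scanning, versus a plain `assert` that always leads the line.

```python
from unittest import mock


def send_greeting(notifier, name):
    # Toy function used only for this illustration.
    message = f"Hello, {name}!"
    notifier.send(message)
    return message


def test_mock_style():
    notifier = mock.Mock()
    send_greeting(notifier, "Airflow")
    # Easy to miss when scanning: the check does not start with 'assert'.
    notifier.send.assert_called_once_with("Hello, Airflow!")


def test_assert_style():
    notifier = mock.Mock()
    result = send_greeting(notifier, "Airflow")
    # The 'assert' keyword is the first word on the line and stands out,
    # especially with syntax highlighting.
    assert result == "Hello, Airflow!"
    assert notifier.send.call_args == mock.call("Hello, Airflow!")
```

Both tests check the same behavior; the argument above is purely about which one reads better when skimming a test file.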
Dear Apache Airflow Community,
I'm reaching out to explore how we can
leverage Apache Airflow to optimize data sourcing processes. We are
building a data engine platform for aggregating diverse data sources, then
generating actionable insights and we believe Airflow’s orchestration
capabilities coul
Agreed!
Once the PR is up, we can have these implementation-level discussions
over there. Good chat, though!
Thanks & Regards,
Amogh Desai
On Wed, Jul 9, 2025 at 3:56 PM Jarek Potiuk wrote:
> Yeah. I think extracting one-by-one, feature-by-feature that we want to
> share to a separate distrib
Hello Apache Airflow Community,
This is a call for the vote to release Helm Chart version 1.18.0.
The release candidate is available at:
https://dist.apache.org/repos/dist/dev/airflow/helm-chart/1.18.0rc2/
airflow-chart-1.18.0-source.tar.gz - is the "main source release" that
comes with INSTALL
I'm a bit late to the party, and really only reiterating what has already been
said, but of the two examples (original and your rewrite), I prefer the
original. I think as a general rule, we tend to use the assert_called_once,
etc. with mocks but the asserts with non-mocked variables.
I am all f
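The general rule described above can be sketched in one hybrid test (the `load_and_count` helper and its `get_records` call are hypothetical, made up for this illustration): mock-family assertions for interactions with the mock, plain `assert` for ordinary values.

```python
from unittest import mock


def load_and_count(hook):
    # Hypothetical helper for illustration only: fetches rows via a
    # hook object and returns how many rows came back.
    rows = hook.get_records("SELECT 1")
    return len(rows)


def test_load_and_count():
    hook = mock.Mock()
    hook.get_records.return_value = [(1,), (2,)]
    count = load_and_count(hook)
    # Interaction with the mock: the assert_called_once_with family
    # reads naturally here.
    hook.get_records.assert_called_once_with("SELECT 1")
    # Non-mocked variable: a plain assert, per the rule above.
    assert count == 2
```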
Hey fellow Airflowers,
The release candidates for *Apache Airflow 3.0.3rc5* and *Task SDK
1.0.3rc5* are now available for testing!
This email is calling for a vote on the release, which will last at least
until 14th July and until 3 binding +1 votes have been received.
Consider this my +1 bindin
To me, I'm always working from a user perspective. My goal is to make their
lives easier, their deployments easier, the product the most enjoyable for them
to use. To me, the best user experience is that they should enable bundle
versioning and it should just work with few or no extra steps.
My 2 cents on the discussion are similar to the opinions shared before.
From my Edge3 experience, migrating the DB from a provider - even if
technically enabled - is a bit of a pain. It adds a lot of boilerplate,
you need to consider that your provider should also still be compatible with
AF2 (I assume), and once a
+1 binding, will be happy to help in the implementation :)
Shahar
On Mon, Jul 7, 2025 at 9:27 PM Jarek Potiuk wrote:
> Hello Airflow community,
>
> I would like to call a vote on "reloaded" version of the AIP
>
> https://cwiki.apache.org/confluence/display/AIRFLOW/AIP-67+Multi-team+deployment+
What about the DynamoDB idea? What you are trying to trade off is "writing
to the Airflow metadata DB" against "writing to another DB", really. So yes, it is
another thing you will need to have write access to - other than the Airflow
DB - but it's really the question of whether the boundaries should be on "Everythin
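To make the DynamoDB option concrete, here is a small sketch under assumptions of my own: the table name, key schema, and attribute names below are invented for illustration (nothing in the thread defines them), and the actual `put_item` call is left as a comment rather than executed.

```python
import json


def build_bundle_state_item(bundle_name, version, tracked_files):
    """Build a DynamoDB item describing one DAG-bundle version.

    Hypothetical schema for illustration only; the discussion above
    does not fix any attribute names.
    """
    return {
        "bundle_name": {"S": bundle_name},  # partition key (assumed)
        "version": {"S": version},          # sort key (assumed)
        # Store the file list as a JSON string attribute.
        "tracked_files": {"S": json.dumps(sorted(tracked_files))},
    }


# Writing the state with boto3 would then look roughly like this
# (not executed here; requires AWS credentials and an existing table):
#
#   import boto3
#   client = boto3.client("dynamodb")
#   client.put_item(
#       TableName="airflow_bundle_state",  # hypothetical table name
#       Item=build_bundle_state_item("my_bundle", "v42", files),
#   )
```

The appeal of this shape is that each (bundle, version) pair is one item, so conditional writes can guard against two schedulers racing to record the same version.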
Thanks for engaging folks!
I don’t love the idea of using another bucket. For one, this means Airflow
needs write access to S3, which is not ideal; some users/customers are very
sensitive about ever allowing write access to things. And two, you will
commonly get issues with a design that leaks s
Thanks Jarek for the updates! They look good!
This is one of the most anticipated and complex features.
+1 binding
Best regards,
Bugra Ozturk
On Wed, 9 Jul 2025, 00:48 Pavankumar Gopidesu,
wrote:
> Thanks Jarek for updated work,
>
> Took a bit of time and re-read again, now it's more concise and
> Another option would also be using a DynamoDB table? That also supports
snapshots, and I feel it works very well with state management.
Yep that would also work.
Anything "Amazon" to keep state would do. I think that it should be our
"default" approach that if we have to keep state and the state i
Agree, another S3 bucket also works here.
Another option would also be using a DynamoDB table? That also supports
snapshots, and I feel it works very well with state management.
Pavan
On Wed, Jul 9, 2025 at 2:06 PM Jarek Potiuk wrote:
> One of the options would be to use a similar approach as terr
One of the options would be to use a similar approach as terraform uses -
i.e. use dedicated "metadata" state storage in a DIFFERENT s3 bucket than
DAG files. Since we know there must be an S3 available (obviously) - it
seems not too excessive to assume that there might be another bucket,
independe
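The Terraform-style approach described above can be sketched as follows. The bucket name, key layout, and JSON schema are assumptions made up for this example, and the boto3 calls are shown only as comments since they need real credentials and buckets.

```python
import json

# Hypothetical key inside the dedicated state bucket.
STATE_KEY = "airflow/bundle-state.json"


def serialize_state(versions):
    # versions: mapping of bundle name -> last-seen version identifier.
    # A schema marker makes future format changes detectable.
    return json.dumps({"schema": 1, "versions": versions}, sort_keys=True)


def deserialize_state(raw):
    doc = json.loads(raw)
    if doc.get("schema") != 1:
        raise ValueError("unknown state schema")
    return doc["versions"]


# Against a dedicated bucket this would be, roughly (not executed here):
#
#   import boto3
#   s3 = boto3.client("s3")
#   s3.put_object(
#       Bucket="my-airflow-state",  # separate from the DAG-files bucket
#       Key=STATE_KEY,
#       Body=serialize_state(versions),
#   )
```

Keeping the state object in a different bucket mirrors Terraform's remote-state separation: DAG files stay read-only to Airflow, and only the small state bucket needs write access.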
+1 (binding), Checked SVN, Docker, reproducibility, licence, signature,
checksums -> all good.
On Wed, Jul 9, 2025 at 1:42 PM Ankit Chaurasia wrote:
> +1 non-binding to release 9.10.0rc1.
>
> I ran the example DAGs for S3 along with deferrable modes.
>
> Regards,
> *Ankit Chaurasia*
+1 non-binding to release 9.10.0rc1.
I ran the example DAGs for S3 along with deferrable modes.
Regards,
*Ankit Chaurasia*
On Wed, Jul 9, 2025 at 12:09 PM Amogh Desai
wrote:
> +1 binding.
>
> - Checked SVN
> - Checked in Docker
> - Checked reproducible package builds
> - Checked licenses
Yeah. I think extracting one-by-one, feature-by-feature that we want to
share to a separate distribution is the best approach - it will actually
also help with the "__init__.py" cleanup - because almost by definition -
those distributions will not be able to "reach" outside - i.e. they only
can be
You probably make a valid point.
Maybe this is an implementation detail, so we could figure it out as we
start on a POC and factor in these things as we move along?
But from an initial guess, I would think that execution-time-related items
(if we manage to enumerate them) would be something that