This is an automated email from the ASF dual-hosted git repository.
potiuk pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/airflow-site.git
The following commit(s) were added to refs/heads/main by this push:
new 3d186ec3cb Fixing some typos in various texts (#1252)
3d186ec3cb is described below
commit 3d186ec3cb1ce888eb9a945cb675abf97c19295a
Author: Didier Durand <[email protected]>
AuthorDate: Thu Oct 23 16:54:56 2025 +0200
Fixing some typos in various texts (#1252)
---
CONTRIBUTE.md | 4 ++--
landing-pages/site/content/en/blog/airflow-1.10.10/index.md | 2 +-
landing-pages/site/content/en/blog/airflow-2.10.0/index.md | 2 +-
landing-pages/site/content/en/blog/airflow-2.3.0/index.md | 2 +-
landing-pages/site/content/en/blog/airflow-2.4.0/index.md | 2 +-
landing-pages/site/content/en/blog/airflow-2.5.0/index.md | 2 +-
landing-pages/site/content/en/blog/airflow-2.7.0/index.md | 2 +-
landing-pages/site/content/en/blog/airflow-survey-2022/index.md | 4 ++--
landing-pages/site/content/en/ecosystem/_index.md | 2 +-
landing-pages/site/content/en/use-cases/seniorlink.md | 2 +-
10 files changed, 12 insertions(+), 12 deletions(-)
diff --git a/CONTRIBUTE.md b/CONTRIBUTE.md
index 64e479c3a8..63621f759e 100644
--- a/CONTRIBUTE.md
+++ b/CONTRIBUTE.md
@@ -383,7 +383,7 @@ If your meetup group isn't on the list, add it following the format of existing
# How to release new documentation
-Building documentation for the Apache Airlfow project also requires Python3.6 with pip and graphviz. You also need to have additional `apache/airflow` repository available.
+Building documentation for the Apache Airflow project also requires Python3.6 with pip and graphviz. You also need to have additional `apache/airflow` repository available.
### Prerequisite Tasks
@@ -630,7 +630,7 @@ gcloud compute instances delete "${GCP_INSTANCE_NAME}" --zone="${GCP_ZONE}"
## Use RAM disk for build
-If you wanna create RAM disk, run following command:
+If you want to create RAM disk, run following command:
```bash
sudo mkdir -p /mnt/ramdisk && sudo mount -t tmpfs -o size=16g tmpfs /mnt/ramdisk
```
diff --git a/landing-pages/site/content/en/blog/airflow-1.10.10/index.md b/landing-pages/site/content/en/blog/airflow-1.10.10/index.md
index 21ec4d1db8..b693f70ed6 100644
--- a/landing-pages/site/content/en/blog/airflow-1.10.10/index.md
+++ b/landing-pages/site/content/en/blog/airflow-1.10.10/index.md
@@ -117,7 +117,7 @@ Executor. This should significantly improve execution time and resource usage.
When triggering a DAG from the CLI or the REST API, it’s possible to pass configuration for the DAG run as a JSON blob.
From Airflow 1.10.10, when a user clicks on Trigger Dag button, a new screen confirming the trigger request, and allowing the user to pass a JSON configuration
-blob would be show.
+blob would be shown.
**Screenshot**:

diff --git a/landing-pages/site/content/en/blog/airflow-2.10.0/index.md b/landing-pages/site/content/en/blog/airflow-2.10.0/index.md
index 20661df07f..aebe436aaf 100644
--- a/landing-pages/site/content/en/blog/airflow-2.10.0/index.md
+++ b/landing-pages/site/content/en/blog/airflow-2.10.0/index.md
@@ -23,7 +23,7 @@ I'm happy to announce that Apache Airflow 2.10.0 is now available, bringing an a
With the release of Airflow 2.10.0, we’ve introduced the collection of basic telemetry data, as outlined [here](https://airflow.apache.org/docs/apache-airflow/2.10.0/faq.html#does-airflow-collect-any-telemetry-data). This data will play a crucial role in helping Airflow maintainers gain a deeper understanding of how Airflow is utilized across various deployments. The insights derived from this information are invaluable in guiding the prioritization of patches, minor releases, and securi [...]
-For those who prefer not to participate in data collection, deployments can easily opt-out by setting the `[usage_data_collection] enabled` option to `False` or by using the `SCARF_ANALYTICS=false` environment variable.
+For those who prefer not to participate in data collection, deployments can easily opt out by setting the `[usage_data_collection] enabled` option to `False` or by using the `SCARF_ANALYTICS=false` environment variable.
## Multiple Executor Configuration (formerly "Hybrid Execution")
diff --git a/landing-pages/site/content/en/blog/airflow-2.3.0/index.md b/landing-pages/site/content/en/blog/airflow-2.3.0/index.md
index b5d447e2e2..38a77b5157 100644
--- a/landing-pages/site/content/en/blog/airflow-2.3.0/index.md
+++ b/landing-pages/site/content/en/blog/airflow-2.3.0/index.md
@@ -111,7 +111,7 @@ More information can be found here: [Airflow `db downgrade` and Offline generati
## Reuse of decorated tasks
-You can now reuse decorated tasks across your dag files. A decorated task has an `override` method that allows you to override it's arguments.
+You can now reuse decorated tasks across your dag files. A decorated task has an `override` method that allows you to override its arguments.
Here's an example:
diff --git a/landing-pages/site/content/en/blog/airflow-2.4.0/index.md b/landing-pages/site/content/en/blog/airflow-2.4.0/index.md
index 36cef954eb..2d8e856dfb 100644
--- a/landing-pages/site/content/en/blog/airflow-2.4.0/index.md
+++ b/landing-pages/site/content/en/blog/airflow-2.4.0/index.md
@@ -67,7 +67,7 @@ For more information on datasets, see the [documentation on Data-aware schedulin
As much as we wish all python libraries could be used happily together that sadly isn't the world we live in, and sometimes there are conflicts when trying to install multiple python libraries in an Airflow install -- right now we hear this a lot with `dbt-core`.
-To make this easier we have introduced `@task.external_python` (and the matching `ExternalPythonOperator`) that lets you run an python function as an Airflow task in a pre-configured virtual env, or even a whole different python version. For example:
+To make this easier we have introduced `@task.external_python` (and the matching `ExternalPythonOperator`) that lets you run a python function as an Airflow task in a pre-configured virtual env, or even a whole different python version. For example:
```python
@task.external_python(python='/opt/venvs/task_deps/bin/python')
diff --git a/landing-pages/site/content/en/blog/airflow-2.5.0/index.md b/landing-pages/site/content/en/blog/airflow-2.5.0/index.md
index 8a13146bdb..96f64508e3 100644
--- a/landing-pages/site/content/en/blog/airflow-2.5.0/index.md
+++ b/landing-pages/site/content/en/blog/airflow-2.5.0/index.md
@@ -27,7 +27,7 @@ This quicker release cadence is a departure from our previous habit of releasing
When we released Dataset aware scheduling in September we knew that the tools we gave to manage the Datasets were very much a Minimum Viable Product, and in the last two months the committers and contributors have been hard at work at making the UI much more usable when it comes to Datasets.
-But we we aren't done yet - keep an eye out for more improvements coming over the next couple of releases too.
+But we aren't done yet - keep an eye out for more improvements coming over the next couple of releases too.
## Greatly improved `airflow dags test` command
diff --git a/landing-pages/site/content/en/blog/airflow-2.7.0/index.md b/landing-pages/site/content/en/blog/airflow-2.7.0/index.md
index f60068b173..a574531b79 100644
--- a/landing-pages/site/content/en/blog/airflow-2.7.0/index.md
+++ b/landing-pages/site/content/en/blog/airflow-2.7.0/index.md
@@ -59,7 +59,7 @@ With 2.7.0, OpenLineage changes from a plugin implementation maintained in the O
## Some executors moved into providers
-Some of the executors that were shipped in core Airflow have moved into their respective providers for Airflow 2.7.0. The great benefit of this is to allow faster bug-fix releases as providers are released independently from core.
+Some of the executors that were shipped in core Airflow have moved into their respective providers for Airflow 2.7.0. The great benefit of this is to allow faster bug-fix releases as providers are released independently of core.
The following providers have been moved and require certain minimum providers versions:
* In order to use Celery executors, install the [celery provider version 3.3.0+](https://pypi.org/project/apache-airflow-providers-celery/)
diff --git a/landing-pages/site/content/en/blog/airflow-survey-2022/index.md b/landing-pages/site/content/en/blog/airflow-survey-2022/index.md
index 0cc169b17f..867c97657a 100644
--- a/landing-pages/site/content/en/blog/airflow-survey-2022/index.md
+++ b/landing-pages/site/content/en/blog/airflow-survey-2022/index.md
@@ -27,7 +27,7 @@ The raw response data will be made available here soon, in the meantime, feel fr
### Deployments
-- 85% of the Airflow users have between 1 to 7 active Airflow instances. 62.5% of the Airflow users have between 11 to 250 DAGs in their largest Airflow instance. 75% of the surveyed Airflow users have between 1 to 100 tasks per DAG.
+- 85% of the Airflow users have between 1 and 7 active Airflow instances. 62.5% of the Airflow users have between 11 and 250 DAGs in their largest Airflow instance. 75% of the surveyed Airflow users have between 1 and 100 tasks per DAG.
- Close to 85% of users use one of the Airflow 2 versions, 9.2% users still use 1.10.15, while the remaining 6.3% are still using olderAirflow 1 versions. The good news is that the majority of users on Airflow 1 are planning migration to Airflow 2 quite soon, with resources and capacity being the main blockers.
- In comparison to results from [2020](https://airflow.apache.org/blog/airflow-survey-2020/#overview-of-the-user), more users were interested in monitoring in general and specifically in using tools such as external monitoring services (40.7%, up from 29.6%) and information from metabase (35.7%, up from 25.1%).
- Celery (52.7%) and Kubernetes (39.4%) are the most common executors used.
@@ -237,7 +237,7 @@ Celery (52.7%) and Kubernetes (39.4%) are the most common executors used. Celery
| 1 | 26 | 18.2% |
| 6-10 | 25 | 17.5% |
-Amongst Celery executor users who responded to the survey, close to half the number (44.8%) have between 2 to 5 workers in their largest Airflow instance. It’s notable that nearly a fifth (19.6%) have more than 10 workers.
+Amongst Celery executor users who responded to the survey, close to half the number (44.8%) have between 2 and 5 workers in their largest Airflow instance. It’s notable that nearly a fifth (19.6%) have more than 10 workers.
### Which version of Airflow do you currently use? (single choice)
diff --git a/landing-pages/site/content/en/ecosystem/_index.md b/landing-pages/site/content/en/ecosystem/_index.md
index 59a7677c9e..81b26b8054 100644
--- a/landing-pages/site/content/en/ecosystem/_index.md
+++ b/landing-pages/site/content/en/ecosystem/_index.md
@@ -213,7 +213,7 @@ Apache Airflow releases the [Official Apache Airflow Community Chart](https://ai
[airflow-ha](https://github.com/airflow-laminar/airflow-ha) - High Availability (HA) DAG Utility
-[airflow-supervisor](https://github.com/airflow-laminar/airflow-supervisor) - Easy-to-use [supervisor](http://supervisord.org) integration for long running or "always on" DAGs
+[airflow-supervisor](https://github.com/airflow-laminar/airflow-supervisor) - Easy-to-use [supervisor](http://supervisord.org) integration for long-running or "always on" DAGs
[airflow-balancer](https://github.com/airflow-laminar/airflow-balancer) - Utilities for tracking hosts and ports and load balancing DAGs
diff --git a/landing-pages/site/content/en/use-cases/seniorlink.md b/landing-pages/site/content/en/use-cases/seniorlink.md
index 1376a14257..65cf2b1c8f 100644
--- a/landing-pages/site/content/en/use-cases/seniorlink.md
+++ b/landing-pages/site/content/en/use-cases/seniorlink.md
@@ -13,7 +13,7 @@ Here at Seniorlink, we provide services, support, and technology that engages fa
We had built a robust stack of batch processes to deliver value to the business, deploying these data services in AWS using a mixture of EMR, ECS, Lambda, and EC2. Moving fast, as many new endeavors do, we ultimately ended up with one monolithic batch process with many smaller satellite jobs. Given the scale and quantity of jobs, we began to lose transparency as to what was happening. Additionally, many jobs were launched in a single EMR cluster and so tightly coupled that a failure in o [...]
-We were beginning to lose precious time manually managing the schedules via AWS Datapiplines, AWS Lambdas, and ECS Tasks. Much of our development effort was spent waiting for the monolith to complete running to examine a smaller job within. Our best chance at keeping system transparency was active documentation in our internal wiki.
+We were beginning to lose precious time manually managing the schedules via AWS Datapipelines, AWS Lambdas, and ECS Tasks. Much of our development effort was spent waiting for the monolith to complete running to examine a smaller job within. Our best chance at keeping system transparency was active documentation in our internal wiki.
##### How did Apache Airflow help to solve this problem?
Airflow gave us a way to orchestrate our disparate tools into a single place. Instead of dealing with multiple schedules, we have a straightforward UI to consider. We gained a great deal of transparency, being able to monitor the status of tasks, re-run or restart tasks from any given point in a workflow, and manage the dependencies between jobs using DAGs. We were able to decouple our monolith and schedule the resulting smaller tasks confidently.