ferruzzi commented on code in PR #55088:
URL: https://github.com/apache/airflow/pull/55088#discussion_r2320316862
##########
airflow-core/src/airflow/models/deadline.py:
##########
@@ -355,6 +355,31 @@ def _evaluate_with(self, *, session: Session, **kwargs: Any) -> datetime:
         return _fetch_from_db(DagRun.queued_at, session=session, **kwargs)
+    class AverageRuntimeDeadline(BaseDeadlineReference):
+        """A deadline that calculates the average runtime from past DAG runs."""
+
+        required_kwargs = {"dag_id"}
+
+        @provide_session
+        def _evaluate_with(self, *, session: Session, **kwargs: Any) -> datetime:
+            from airflow.models import DagRun
+
+            dag_id = kwargs["dag_id"]
+
+            # Query for completed DAG runs with both start and end dates
+            query = select(func.avg(func.extract("epoch", DagRun.end_date - DagRun.start_date))).filter(
+                DagRun.dag_id == dag_id, DagRun.start_date.isnot(None), DagRun.end_date.isnot(None)
+            )
+
+            avg_seconds = session.execute(query).scalar()
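The hunk above is truncated after the average is fetched; presumably the remaining added lines turn `avg_seconds` into a concrete deadline datetime. A minimal sketch of that arithmetic, independent of Airflow (the function name, the `anchor` parameter, and the handling of the no-runs case are assumptions, not part of the PR):

```python
from datetime import datetime, timedelta, timezone

def deadline_from_average(avg_seconds, anchor=None):
    """Hypothetical helper: offset an anchor time by the average runtime.

    avg_seconds may be None when no completed DAG runs exist; how the real
    method handles that case is not visible in the truncated diff, so this
    sketch simply raises.
    """
    if avg_seconds is None:
        raise ValueError("No completed DAG runs to average over")
    if anchor is None:
        anchor = datetime.now(timezone.utc)
    # func.extract("epoch", ...) yields seconds, so a plain timedelta suffices.
    return anchor + timedelta(seconds=float(avg_seconds))
```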
Review Comment:
We discussed a few variants of this, including rolling averages, average by
dag version, average since a given datetime, average since a given dagrun_id,
etc. I think we decided to do this as the basic one, then add a parameterized
version that accepts a dict of conditions that get assembled into `WHERE`
clauses. Would you be cool with that approach?
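As a rough illustration of the parameterized follow-up being proposed (everything here is hypothetical, not Airflow's actual API: the function name, the column allowlist, and the equality-only semantics are assumptions), the dict of conditions could be assembled into a parameterized `WHERE` fragment like so:

```python
# Illustrative allowlist of filterable DagRun columns; validating against it
# keeps user-supplied keys from being interpolated unchecked into SQL.
ALLOWED_COLUMNS = {"dag_id", "run_id", "logical_date", "dag_version"}

def build_where_clause(conditions: dict) -> tuple[str, list]:
    """Return (sql_fragment, params) for a column -> value dict."""
    clauses = []
    params = []
    for column, value in conditions.items():
        if column not in ALLOWED_COLUMNS:
            raise ValueError(f"Unsupported filter column: {column!r}")
        clauses.append(f"{column} = ?")  # values stay bound parameters
        params.append(value)
    # With no conditions, fall back to a clause that matches every row.
    sql = " AND ".join(clauses) if clauses else "1 = 1"
    return sql, params
```

In the real implementation this would more likely compose SQLAlchemy filter expressions than raw SQL strings, but the shape of the idea, dict in, `WHERE` clauses out, is the same.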
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]