Copilot commented on code in PR #63733:
URL: https://github.com/apache/airflow/pull/63733#discussion_r2941535874


##########
airflow-core/tests/unit/api_fastapi/execution_api/versions/head/test_dag_runs.py:
##########
@@ -60,6 +60,7 @@ def test_trigger_dag_run(self, client, session, dag_maker):
         dag_run = session.scalars(select(DagRun).where(DagRun.run_id == run_id)).one()
         assert dag_run.conf == {"key1": "value1"}
         assert dag_run.logical_date == logical_date
+        assert dag_run.run_type == DagRunType.OPERATOR

Review Comment:
   The test queries DagRun by run_id only. `run_id` is only unique per DAG, so 
this can silently collide if other tests (or future additions in this test) 
create the same run_id for a different dag_id. Filter by both `dag_id` and 
`run_id` to make the test robust.
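
A minimal stdlib `sqlite3` sketch (illustrative schema, not the real Airflow model) of why the extra `dag_id` filter matters:

```python
# Demonstrates the collision: run_id is unique only per DAG, so two DAGs
# can legitimately share the same run_id.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE dag_run (dag_id TEXT, run_id TEXT, PRIMARY KEY (dag_id, run_id))"
)
conn.executemany(
    "INSERT INTO dag_run VALUES (?, ?)",
    [("dag_a", "manual__2024-01-01"), ("dag_b", "manual__2024-01-01")],
)

# Filtering by run_id alone matches two rows, so an equivalent of
# SQLAlchemy's .one() would raise MultipleResultsFound here.
by_run_id = conn.execute(
    "SELECT dag_id FROM dag_run WHERE run_id = ?", ("manual__2024-01-01",)
).fetchall()

# Filtering by both dag_id and run_id is unambiguous.
by_both = conn.execute(
    "SELECT dag_id FROM dag_run WHERE dag_id = ? AND run_id = ?",
    ("dag_a", "manual__2024-01-01"),
).fetchall()
```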



##########
airflow-core/src/airflow/ui/openapi-gen/requests/types.gen.ts:
##########
@@ -817,7 +817,7 @@ export type DagRunTriggeredByType = 'cli' | 'operator' | 'rest_api' | 'ui' | 'te
 /**
  * Class with DagRun types.
  */
-export type DagRunType = 'backfill' | 'scheduled' | 'manual' | 'asset_triggered' | 'asset_materialization';
+export type DagRunType = 'backfill' | 'scheduled' | 'manual' | 'operator' | 'asset_triggered' | 'asset_materialization';
 

Review Comment:
   This file is marked as auto-generated by @hey-api/openapi-ts. To avoid the 
change being overwritten (and to keep diffs reproducible), please ensure this 
enum update comes from regenerating the OpenAPI client types (and that the 
source OpenAPI schema is updated accordingly), rather than hand-editing the 
generated output.



##########
airflow-core/src/airflow/api_fastapi/execution_api/routes/dag_runs.py:
##########
@@ -112,21 +112,20 @@ def trigger_dag_run(
             },
         )
 
-    # TODO: TriggerDagRunOperator also calls this route but creates MANUAL runs.
-    #  Consider a dedicated run type for operator-triggered runs.
-    if dm.allowed_run_types is not None and DagRunType.MANUAL not in dm.allowed_run_types:
+    if dm.allowed_run_types is not None and DagRunType.OPERATOR not in dm.allowed_run_types:
         raise HTTPException(
             status.HTTP_400_BAD_REQUEST,
             detail={
                 "reason": "denied_run_type",
-                "message": f"Dag with dag_id '{dag_id}' does not allow manual 
runs",
+                "message": f"Dag with dag_id '{dag_id}' does not allow 
operator-triggered runs",
             },
         )
 
     try:
         trigger_dag(
             dag_id=dag_id,
             run_id=run_id,
+            run_type=DagRunType.OPERATOR,

Review Comment:
   This changes the Execution API behavior/schema by introducing a new 
`DagRunType.OPERATOR` and returning/storing it for operator-triggered runs. 
Since the Execution API is Cadwyn-versioned, adding a new enum value typically 
requires a version bump and/or a VersionChange that downgrades 
`run_type='operator'` for older API versions (otherwise older Task SDK clients 
may fail to parse DagRun.run_type). Please add the appropriate Cadwyn 
versioning/migration entry alongside this change.
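
For illustration only, the downgrade such a migration would perform amounts to mapping the new enum value back to the one older clients know (the function name and dict shape here are hypothetical, not Airflow's actual Cadwyn converter):

```python
# Hedged sketch: for pre-change API versions, 'operator' is an unknown
# run_type, so a response converter could rewrite it to 'manual', the value
# previously stored for operator-triggered runs.
def downgrade_run_type(dag_run: dict) -> dict:
    """Return a copy of a serialized DagRun with run_type downgraded."""
    if dag_run.get("run_type") == "operator":
        return {**dag_run, "run_type": "manual"}
    return dag_run
```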
   



##########
task-sdk/src/airflow/sdk/api/datamodels/_generated.py:
##########
@@ -128,6 +128,7 @@ class DagRunType(str, Enum):
     BACKFILL = "backfill"
     SCHEDULED = "scheduled"
     MANUAL = "manual"
+    OPERATOR = "operator"
     ASSET_TRIGGERED = "asset_triggered"
     ASSET_MATERIALIZATION = "asset_materialization"
 

Review Comment:
   `task-sdk/.../_generated.py` is generated by datamodel-codegen (per the 
header). Please confirm this change was produced by regenerating from the 
updated Execution API OpenAPI schema (and that the API versioning story is 
consistent), rather than editing the generated file directly; otherwise it is 
likely to drift or be overwritten on the next regen.
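
To see why the regenerated model and the API version need to stay in sync: a client whose generated enum predates this change (sketched below with the old member set) simply cannot parse the new value:

```python
# Sketch of an older Task SDK client's generated DagRunType, before the
# 'operator' member existed. Parsing the new value raises ValueError.
from enum import Enum


class OldDagRunType(str, Enum):
    BACKFILL = "backfill"
    SCHEDULED = "scheduled"
    MANUAL = "manual"
    ASSET_TRIGGERED = "asset_triggered"
    ASSET_MATERIALIZATION = "asset_materialization"


try:
    OldDagRunType("operator")
    parsed = True
except ValueError:
    # This is the failure an un-regenerated client would hit.
    parsed = False
```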



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
