zhaow-de opened a new issue, #39413: URL: https://github.com/apache/airflow/issues/39413
### Apache Airflow version

2.9.0

### If "Other Airflow 2 version" selected, which one?

_No response_

### What happened?

Operators wrapped with a custom class decorator in a classic-API DAG trigger the warning message `<SomeOperator>.execute cannot be called outside TaskInstance!`

### What you think should happen instead?

PR https://github.com/apache/airflow/pull/37937 introduced a new check to prevent mixed usage of classic and decorated operators. Because a boundary condition is missing from that check, operators can no longer be wrapped with a custom class decorator when the DAG uses the classic API. Ideally, the check should only be triggered when the operator is decorated with `airflow.decorators.task`.

### How to reproduce

Run the DAG below:

```python
from airflow import DAG
from airflow.operators.python import PythonOperator


def deco(cls):
    # An ordinary class decorator, unrelated to airflow.decorators.task
    orig_init = cls.__init__

    def new_init(self, *args, default_args=None, **kwargs):
        orig_init(self, *args, **kwargs)
        self.default_args = default_args

    cls.__init__ = cls._apply_defaults(new_init)
    return cls


@deco
class AlloyPythonOperator(PythonOperator):
    def execute(self, context):
        super().execute(context)


def no_ops():
    pass


with DAG(
    dag_id="test-dag",
    catchup=False,
):
    AlloyPythonOperator(
        task_id="trigger-execute",
        python_callable=no_ops,
    )
```

### Operating System

Ubuntu 22.04 LTS, but the issue is OS independent

### Versions of Apache Airflow Providers

core

### Deployment

Other Docker-based deployment

### Deployment details

_No response_

### Anything else?

_No response_

### Are you willing to submit PR?

- [ ] Yes I am willing to submit a PR!

### Code of Conduct

- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
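For illustration only, here is a minimal, standalone sketch of the kind of sentinel-style guard on `execute()` that the warning message points at. This is a hypothetical pattern, not Airflow's actual `ExecutorSafeguard` code, and all names in it (`guard_execute`, `_SENTINEL`, `BaseOp`, `SubOp`) are made up. It shows how such a guard can false-positive when a subclass delegates to `super().execute(context)`, as the reproducer above does: the inner guarded call never receives the sentinel that was passed to the outer call.

```python
import functools
import warnings

# Hypothetical sentinel object standing in for the value a TaskInstance-style
# caller would pass when invoking execute() through the framework.
_SENTINEL = object()


def guard_execute(func):
    """Warn when execute() is invoked without the framework's sentinel."""

    @functools.wraps(func)
    def wrapper(self, context, *, _sentinel=None):
        if _sentinel is not _SENTINEL:
            warnings.warn(
                f"{type(self).__name__}.execute cannot be called outside TaskInstance!"
            )
        return func(self, context)

    return wrapper


class BaseOp:
    @guard_execute
    def execute(self, context):
        return "base result"


class SubOp(BaseOp):
    @guard_execute
    def execute(self, context):
        # Delegating to the parent re-enters the guarded BaseOp.execute()
        # without the sentinel, so the guard warns even though the original
        # call came from the "framework" side.
        return super().execute(context)


# "Framework" call: the sentinel reaches only the outermost execute().
SubOp().execute({}, _sentinel=_SENTINEL)
```

Running the sketch emits the same style of warning despite the outermost call being made "from the framework", which is the boundary condition the issue asks the real check to account for.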