dondaum commented on code in PR #62984:
URL: https://github.com/apache/airflow/pull/62984#discussion_r2945619055
##########
providers/amazon/src/airflow/providers/amazon/aws/executors/batch/batch_executor.py:
##########
@@ -337,18 +357,18 @@ def attempt_submit_jobs(self):
self.log.error(
(
"This job has been unsuccessfully attempted too many times (%s). "
- "Dropping the task. Reason: %s"
+ "Dropping the workload. Reason: %s"
),
attempt_number,
failure_reason,
)
self.log_task_event(
event="batch job submit failure",
extra=f"This job has been unsuccessfully attempted too many times ({attempt_number}). "
- f"Dropping the task. Reason: {failure_reason}",
- ti_key=key,
+ f"Dropping the workload. Reason: {failure_reason}",
+ ti_key=workload_key,
)
- self.fail(key=key)
+ self.fail(key=workload_key)
Review Comment:
Great find. I followed the relevant code: the scheduler uses this log queue
to write a `Log()` entry. The log itself can be created without a task
instance, but the executor method expects a task instance key: `def
log_task_event(self, *, event: str, extra: str, ti_key: TaskInstanceKey)`.
So I'm wondering whether we should adjust `log_task_event()` to accept both
key types, which might also require changes in other executors, or whether
we should remove the callback from this log queue instead.
@ferruzzi any thoughts on this?
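As a rough illustration of the first option, `log_task_event` could take a union of the two key types and only attach task-instance context when it actually has one. This is a hedged sketch: `TaskInstanceKey` and `WorkloadKey` below are simplified stand-ins, not the real Airflow classes, and the dict return is just for demonstration.

```python
from dataclasses import dataclass
from typing import Union

# Hypothetical stand-ins for illustration only; the real Airflow types
# carry more fields (e.g. try_number, map_index).
@dataclass(frozen=True)
class TaskInstanceKey:
    dag_id: str
    task_id: str
    run_id: str

@dataclass(frozen=True)
class WorkloadKey:
    token: str

def log_task_event(*, event: str, extra: str,
                   key: Union[TaskInstanceKey, WorkloadKey]) -> dict:
    """Sketch of a widened log_task_event accepting either key type."""
    entry = {"event": event, "extra": extra}
    if isinstance(key, TaskInstanceKey):
        # Full task-instance context is available: record it so the
        # scheduler can associate the Log() entry with the TI.
        entry.update(dag_id=key.dag_id, task_id=key.task_id,
                     run_id=key.run_id)
    else:
        # Only an opaque workload identifier is available; the Log()
        # entry would be written without a task instance reference.
        entry["workload_id"] = key.token
    return entry
```

The alternative (dropping the callback for workload-keyed submissions) avoids touching other executors but loses the audit-log entry for dropped workloads.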
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]