[ https://issues.apache.org/jira/browse/AIRFLOW-5006?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

t oo updated AIRFLOW-5006:
--------------------------
    Description: 
If a DagRun fails (so some rows go to the task_fail table in the metastore) but
the DagRun is then restarted by running 'clear' from the CLI (and the rerun
succeeds), the rows are deleted from the metastore's task_fail table. Is this
expected? Or is there some way I can keep a history of all failures in the
metastore?

 

Even the history of old successful runs is lost when running 'clear' for the
same execution date.
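As a workaround, the failure history could be snapshotted into a separate audit table before running 'clear'. The sketch below simulates this with an in-memory SQLite database; the table name task_fail is Airflow's, but the column set here is a simplified assumption, and the audit table (task_fail_audit) is hypothetical, not part of Airflow's schema:

```python
import sqlite3

# Simplified stand-in for the metastore (real setup: MySQL RDS).
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE task_fail (
    task_id TEXT, dag_id TEXT, execution_date TEXT, duration INTEGER)""")
# Hypothetical audit table with the same shape.
conn.execute("""CREATE TABLE task_fail_audit (
    task_id TEXT, dag_id TEXT, execution_date TEXT, duration INTEGER)""")
conn.execute(
    "INSERT INTO task_fail VALUES ('t1', 'my_dag', '2019-07-25', 30)")

# Copy the failure rows into the audit table BEFORE running 'clear'.
conn.execute("INSERT INTO task_fail_audit SELECT * FROM task_fail")
conn.commit()

# Simulate what 'clear' does today: the task_fail rows for that
# dag/execution date are deleted.
conn.execute("DELETE FROM task_fail "
             "WHERE dag_id = 'my_dag' AND execution_date = '2019-07-25'")

remaining = conn.execute("SELECT COUNT(*) FROM task_fail").fetchone()[0]
audited = conn.execute("SELECT COUNT(*) FROM task_fail_audit").fetchone()[0]
print(remaining, audited)  # 0 1: task_fail is wiped, audit copy survives
```

This only preserves failures snapshotted before each 'clear'; a proper fix would be for Airflow itself to keep the audit trail.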

  was:If a DagRun fails (so some rows go to the task_fail table in the
metastore) but the DagRun is then restarted by running 'clear' from the CLI
(and the rerun succeeds), the rows are deleted from the metastore's task_fail
table. Is this expected? Or is there some way I can keep a history of all
failures in the metastore?


> Need task_failures audit trail for failed run in metastore db even after 
> running 'clear' results in next DagRun successful
> --------------------------------------------------------------------------------------------------------------------------
>
>                 Key: AIRFLOW-5006
>                 URL: https://issues.apache.org/jira/browse/AIRFLOW-5006
>             Project: Apache Airflow
>          Issue Type: Improvement
>          Components: database
>    Affects Versions: 1.10.3, 1.10.5
>         Environment: mysql RDS metastore, localexecutor
>            Reporter: t oo
>            Priority: Major
>
> If a DagRun fails (so some rows go to the task_fail table in the metastore)
> but the DagRun is then restarted by running 'clear' from the CLI (and the
> rerun succeeds), the rows are deleted from the metastore's task_fail table.
> Is this expected? Or is there some way I can keep a history of all failures
> in the metastore?
>  
> Even the history of old successful runs is lost when running 'clear' for the
> same execution date.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
