yang227 opened a new issue, #14031: URL: https://github.com/apache/dolphinscheduler/issues/14031
### Search before asking

- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.

### What happened

When scheduling DAG tasks for a large project that contains many interdependent sub_process tasks, many of the sub_process task instances are submitted successfully but never obtain any subsequent status. API access log excerpt:

```
[INFO] 2023-04-27 17:24:09.005 +0800 org.apache.dolphinscheduler.api.aspect.AccessLogAspect:[90] - REQUEST TRACE_ID:3cd82a4e-39b7-4bbd-a7e9-9d9ba163a5fe, LOGIN_USER:admin, URI:/dolphinscheduler/projects/9363509786496/task-instances, METHOD:GET, HANDLER:org.apache.dolphinscheduler.api.controller.TaskInstanceController.queryTaskListPaging, ARGS:{processInstanceId=0, executorName=, searchVal=, projectCode=9363509786496, pageNo=2, stateType=null, host=, pageSize=10, taskName=null, startTime=, endTime=, processInstanceName=}
[INFO] 2023-04-27 17:24:09.694 +0800 org.apache.dolphinscheduler.api.aspect.AccessLogAspect:[90] - REQUEST TRACE_ID:c2d01542-67a9-4153-b9d0-ec29fc997d30, LOGIN_USER:admin, URI:/dolphinscheduler/projects/9363509786496/task-instances, METHOD:GET, HANDLER:org.apache.dolphinscheduler.api.controller.TaskInstanceController.queryTaskListPaging, ARGS:{processInstanceId=0, executorName=, searchVal=, projectCode=9363509786496, pageNo=3, stateType=null, host=, pageSize=10, taskName=null, startTime=, endTime=, processInstanceName=}
```

### What you expected to happen

After a task instance is submitted successfully, its status should keep progressing normally. Here the status never advances after submission, which is abnormal.

### How to reproduce

Create hundreds of sub_process tasks and schedule hundreds of tasks inside a large sub_process; the situation described above will then arise.

### Anything else

The metadata row for the affected task instance that we found:

```
MySQL [dolphinscheduler_305]> select *
    -> from t_ds_task_instance where id = '1653';
```

The returned row (1 row in set, 0.00 sec), shown column by column for readability:

```
id                      : 1653
name                    : ods_pvs_middle_data__product_marketing_information_import_20230427170420345
task_type               : SUB_PROCESS
task_code               : 9363950124043
task_definition_version : 1
process_instance_id     : 683
state                   : 0
submit_time             : 2023-04-27 17:07:12
start_time              : NULL
end_time                : NULL
host                    : NULL
execute_path            : NULL
log_path                : NULL
alert_flag              : 0
retry_times             : 0
pid                     : 0
app_link                : NULL
task_params             : {"appName":null,"connParams":null,"datasource":0,"deployMode":null,"displayRows":10,"groupId":0,"localParams":[],"others":null,"postStatements":[],"preStatements":[],"processDefinitionCode":9363509988864,"programType":null,"rawScript":"show tables","resourceList":[],"segmentSeparator":null,"sendEmail":false,"sparkVersion":null,"sql":null,"sqlType":null,"switchResult":"{}","title":null,"type":null,"udfs":null,"waitStartTimeout":null,"conditionResult":"{\"failedNode\":[],\"successNode\":[]}","dependence":"{}"}
flag                    : 1
retry_interval          : 1
max_retry_times         : 0
task_instance_priority  : 2
worker_group            : default
environment_code        : -1
environment_config      : NULL
executor_id             : 1
first_submit_time       : 2023-04-27 17:07:12
delay_time              : 0
var_pool                : NULL
task_group_id           : 0
dry_run                 : 0
```

### Version

3.0.x

### Are you willing to submit PR?

- [X] Yes I am willing to submit a PR!

### Code of Conduct

- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)

-- This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: [email protected]. For queries about this service, please contact Infrastructure at: [email protected].
