GitHub user MasterDDT commented on the issue:

    https://github.com/apache/spark/pull/14072
  
    Yeah, it's in `StageInfo.stageId` from the `SparkListener` events. Our use
case is that we use FIFO scheduling (jobs fully utilize the cluster, so running
them in parallel doesn't help), but I want to listen for the job/stage start
events and log/kill stages that take too long. From what I can tell, the
`DAGScheduler` doesn't have a built-in job/stage timeout.
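
    In case it's useful, here's a minimal sketch of that approach. The
listener name, the `timeoutMs` parameter, and the one-second sweep interval
are all made up for illustration; the only Spark APIs assumed are
`SparkListener`, `StageInfo.stageId`, and `SparkContext.cancelStage`:

```scala
import java.util.concurrent.{ConcurrentHashMap, Executors, TimeUnit}

import scala.collection.JavaConverters._

import org.apache.spark.SparkContext
import org.apache.spark.scheduler.{SparkListener, SparkListenerStageCompleted, SparkListenerStageSubmitted}

// Hypothetical listener: records when each stage starts and cancels any
// stage that has been running longer than `timeoutMs`.
class StageTimeoutListener(sc: SparkContext, timeoutMs: Long) extends SparkListener {
  // stageId -> start time in millis
  private val startTimes = new ConcurrentHashMap[Int, Long]()
  private val reaper = Executors.newSingleThreadScheduledExecutor()

  // Sweep once per second for stages past the timeout.
  reaper.scheduleAtFixedRate(new Runnable {
    override def run(): Unit = {
      val now = System.currentTimeMillis()
      startTimes.asScala.foreach { case (stageId, started) =>
        if (now - started > timeoutMs) {
          startTimes.remove(stageId)
          sc.cancelStage(stageId) // kill the long-running stage
        }
      }
    }
  }, 1, 1, TimeUnit.SECONDS)

  override def onStageSubmitted(event: SparkListenerStageSubmitted): Unit =
    startTimes.put(event.stageInfo.stageId, System.currentTimeMillis())

  override def onStageCompleted(event: SparkListenerStageCompleted): Unit =
    startTimes.remove(event.stageInfo.stageId)
}
```

    Registering it is just `sc.addSparkListener(new StageTimeoutListener(sc, timeoutMs))`.
Cancelling at stage granularity mirrors what the web UI's "kill" link does;
`sc.cancelJob(jobId)` from `onJobStart` works the same way if you'd rather
time out whole jobs.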

