[ https://issues.apache.org/jira/browse/SPARK-33711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17246092#comment-17246092 ]
Attila Zsolt Piros edited comment on SPARK-33711 at 12/8/20, 7:21 PM:
----------------------------------------------------------------------
Yes, it does. I have updated the affected versions accordingly. Thanks [~dongjoon].

was (Author: attilapiros):
Yes, it does. I have updated the affected versions accordingly.

> Race condition in Spark k8s Pod lifecycle manager that leads to shutdowns
> --------------------------------------------------------------------------
>
>                 Key: SPARK-33711
>                 URL: https://issues.apache.org/jira/browse/SPARK-33711
>             Project: Spark
>          Issue Type: Sub-task
>          Components: Kubernetes
>    Affects Versions: 2.3.4, 2.4.7, 3.0.0, 3.1.0, 3.2.0
>            Reporter: Attila Zsolt Piros
>            Priority: Major
>
> Watching a POD (ExecutorPodsWatchSnapshotSource) reports single-POD changes,
> which can wrongly lead the executor POD lifecycle manager to detect missing
> PODs (PODs known by the scheduler backend but absent from the POD snapshots).
>
> A key indicator of this is the following log message:
>
> "The executor with ID [some_id] was not found in the cluster but we didn't
> get a reason why. Marking the executor as failed. The executor may have been
> deleted but the driver missed the deletion event."
>
> So one of the problems is running the missing-POD detection when only a
> single POD has changed, without a full, consistent snapshot of all the PODs
> (see ExecutorPodsPollingSnapshotSource). The other could be a race between
> the executor POD lifecycle manager and the scheduler backend.

--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org
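The partial-snapshot failure mode described in the issue can be sketched with a tiny simulation. This is an illustration only, not Spark's actual implementation; the function name and executor IDs below are hypothetical:

```python
# Illustrative simulation only -- NOT Spark's actual code. It sketches
# why missing-POD detection must not run against a partial, single-pod
# snapshot. All names here are hypothetical.

def find_missing_executors(known_to_backend, snapshot_pod_ids):
    """Executors the scheduler backend knows about but the snapshot lacks."""
    return known_to_backend - snapshot_pod_ids

# The scheduler backend currently tracks three live executors.
known = {"exec-1", "exec-2", "exec-3"}

# Full snapshot (as a polling source would produce): a consistent view
# of all pods, so nothing is reported missing.
assert find_missing_executors(known, {"exec-1", "exec-2", "exec-3"}) == set()

# Watch event: describes only the single pod that changed. Running the
# detection against this partial view wrongly flags the two untouched,
# healthy executors as missing, so they get marked as failed.
assert find_missing_executors(known, {"exec-2"}) == {"exec-1", "exec-3"}
```

This is why the issue distinguishes the watch source (single-pod events) from the polling source (full snapshots): only the latter is a safe basis for concluding that an executor has disappeared.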