[
https://issues.apache.org/jira/browse/SPARK-22976?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Russell Spitzer updated SPARK-22976:
------------------------------------
Description:
Spark Standalone worker cleanup finds directories to remove with a listFiles
call. This includes both application directories and driver directories from
applications submitted in cluster mode.
A directory is considered not to be part of a running app if the worker does
not have an executor with a matching ID.
https://github.com/apache/spark/blob/v2.2.1/core/src/main/scala/org/apache/spark/deploy/worker/Worker.scala#L432
{code}
val appIds = executors.values.map(_.appId).toSet
val isAppStillRunning = appIds.contains(appIdFromDir)
{code}
If a driver has been started on a node, but all of the executors are on other
nodes, the worker running the driver will always assume that the driver
directory is not part of a running app.
Consider a two-node Spark cluster with Worker A and Worker B, where each node
has a single core available. We submit our application in cluster deploy mode;
the driver begins running on Worker A while the executor starts on Worker B.
A cleanup is then triggered on Worker A, which scans its work directory and
finds the directory
{code}
/var/lib/spark/worker/driver-20180105234824-0000
{code}
Worker A checks its executor list and finds no matching entries, since it has
no executors for this application. Worker A then removes the directory even
though the driver may still be actively running.
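To make the failure concrete, here is a minimal, self-contained sketch of the
check in isolation. It is not the actual Worker code; the object name and the
println are illustrative, but the containment test and the driver directory
name match the scenario above:
{code}
// Standalone sketch of the Worker.scala#L432 check, not the real Worker code.
object StaleDirCheckSketch {
  def main(args: Array[String]): Unit = {
    // Worker A hosts only the driver, so its executor map is empty and the
    // derived set of executor app IDs is empty as well.
    val executorAppIds: Set[String] = Set.empty

    // ID derived from the work-dir entry found by the cleanup scan.
    val appIdFromDir = "driver-20180105234824-0000"

    // Same containment test as the current code: with no executors for this
    // application, the driver directory is classified as not running.
    val isAppStillRunning = executorAppIds.contains(appIdFromDir)
    println(s"isAppStillRunning = $isAppStillRunning") // prints false
  }
}
{code}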
I think this could be fixed by modifying line 432 to be
{code}
val appIds = executors.values.map(_.appId).toSet ++
  drivers.values.map(_.driverId)
{code}
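As a sanity check of the proposed change, here is a sketch of the combined ID
set; the executor app ID below is made up for illustration:
{code}
// Sketch of the proposed check: the known-ID set is the union of executor
// app IDs and running driver IDs, so driver directories are recognized.
object FixedDirCheckSketch {
  def main(args: Array[String]): Unit = {
    val executorAppIds = Set("app-20180105235100-0000") // hypothetical app ID
    val runningDriverIds = Set("driver-20180105234824-0000")

    val knownIds = executorAppIds ++ runningDriverIds

    // The driver directory from the scenario above now matches a known ID
    // and survives the cleanup pass.
    println(knownIds.contains("driver-20180105234824-0000")) // prints true
  }
}
{code}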
I'll run a test and submit a PR soon.
was:
Spark Standalone worker cleanup finds directories to remove with a listFiles
call. This includes both application directories and driver directories from
applications submitted in cluster mode.
A directory is considered not to be part of a running app if the worker does
not have an executor with a matching ID.
https://github.com/apache/spark/blob/v2.2.1/core/src/main/scala/org/apache/spark/deploy/worker/Worker.scala#L432
{code}
val appIds = executors.values.map(_.appId).toSet
val isAppStillRunning = appIds.contains(appIdFromDir)
{code}
If a driver has been started on a node but all of the executors are on other
workers, the worker will always assume that the driver directory is not
running.
Consider a two-node Spark cluster with Worker A and Worker B, where each node
has a single core available. We submit our application in cluster deploy mode;
the driver begins running on Worker A while the executor starts on Worker B.
A cleanup is then triggered on Worker A, which scans its work directory and
finds the directory
{code}
/var/lib/spark/worker/driver-20180105234824-0000
{code}
Worker A checks its executor list and finds no matching entries, since it has
no executors for this application. Worker A then removes the directory even
though the driver may still be actively running.
I think this could be fixed by modifying line 432 to be
{code}
val appIds = executors.values.map(_.appId).toSet ++
  drivers.values.map(_.driverId)
{code}
I'll run a test and submit a PR soon.
> Worker cleanup can remove running driver directories
> ----------------------------------------------------
>
> Key: SPARK-22976
> URL: https://issues.apache.org/jira/browse/SPARK-22976
> Project: Spark
> Issue Type: Bug
> Components: Deploy, Spark Core
> Affects Versions: 1.0.2
> Reporter: Russell Spitzer
>