[jira] [Assigned] (SPARK-39955) Improve LaunchTask process to avoid Stage failures caused by fail-to-send LaunchTask messages
[ https://issues.apache.org/jira/browse/SPARK-39955?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Mridul Muralidharan reassigned SPARK-39955:
-------------------------------------------
Assignee: Kai-Hsun Chen  (was: Kai-Hsun Chen)

> Improve LaunchTask process to avoid Stage failures caused by fail-to-send
> LaunchTask messages
> -------------------------------------------------------------------------
>
>                 Key: SPARK-39955
>                 URL: https://issues.apache.org/jira/browse/SPARK-39955
>             Project: Spark
>          Issue Type: Improvement
>          Components: Spark Core
>    Affects Versions: 3.4.0
>            Reporter: Kai-Hsun Chen
>            Assignee: Kai-Hsun Chen
>            Priority: Major
>             Fix For: 3.4.0
>
> There are two possible causes of RPC failures: Task Failure and Network Failure.
> (1) Task Failure: the network is healthy, but the task crashes the executor's JVM, so the RPC fails.
> (2) Network Failure: the executor is healthy, but the network between the Driver and the Executor is broken, so the RPC fails.
> These two kinds of failure should be handled differently. First, if the failure is a Task Failure, Spark should increment the counter `numFailures`; if `numFailures` exceeds a threshold, Spark marks the job as failed. Second, if the failure is a Network Failure, Spark should not increment `numFailures`; it should simply reassign the task to a new executor, so the job is not marked as failed because of a Network Failure.
> However, Spark currently treats every RPC failure as a Task Failure, which causes spurious Spark job failures.

--
This message was sent by Atlassian Jira
(v8.20.10#820010)

---------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org
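The counting policy described above can be sketched as follows. This is an illustrative model only, not actual Spark code: the names `FailureKind`, `TaskTracker`, and `MAX_FAILURES` are hypothetical, with `MAX_FAILURES` standing in for a threshold like `spark.task.maxFailures`.

```python
from enum import Enum, auto

# Hypothetical classification of why a LaunchTask RPC failed.
class FailureKind(Enum):
    TASK_FAILURE = auto()     # executor JVM crashed while running the task
    NETWORK_FAILURE = auto()  # driver <-> executor link broke; executor is fine

MAX_FAILURES = 4  # illustrative threshold, akin to spark.task.maxFailures

class TaskTracker:
    """Tracks failures for one task and decides what to do next."""

    def __init__(self) -> None:
        self.num_failures = 0
        self.aborted = False

    def on_rpc_failure(self, kind: FailureKind) -> str:
        if kind is FailureKind.TASK_FAILURE:
            # Genuine task failures count toward the abort threshold.
            self.num_failures += 1
            if self.num_failures > MAX_FAILURES:
                self.aborted = True
                return "abort job"
            return "retry task"
        # Network failures are not the task's fault: do not count them,
        # just reschedule the task on a different executor.
        return "reassign to new executor"
```

Under this sketch, any number of network failures leaves `num_failures` untouched, while repeated task failures eventually abort the job, which is the distinction the issue asks Spark to make.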
[jira] [Assigned] (SPARK-39955) Improve LaunchTask process to avoid Stage failures caused by fail-to-send LaunchTask messages
[ https://issues.apache.org/jira/browse/SPARK-39955?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Mridul Muralidharan reassigned SPARK-39955:
-------------------------------------------
Assignee: Kai-Hsun Chen  (was: Mridul Muralidharan)
[jira] [Assigned] (SPARK-39955) Improve LaunchTask process to avoid Stage failures caused by fail-to-send LaunchTask messages
[ https://issues.apache.org/jira/browse/SPARK-39955?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Mridul Muralidharan reassigned SPARK-39955:
-------------------------------------------
Assignee: Mridul Muralidharan
[jira] [Assigned] (SPARK-39955) Improve LaunchTask process to avoid Stage failures caused by fail-to-send LaunchTask messages
[ https://issues.apache.org/jira/browse/SPARK-39955?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Apache Spark reassigned SPARK-39955:
------------------------------------
Assignee: (was: Apache Spark)
[jira] [Assigned] (SPARK-39955) Improve LaunchTask process to avoid Stage failures caused by fail-to-send LaunchTask messages
[ https://issues.apache.org/jira/browse/SPARK-39955?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Apache Spark reassigned SPARK-39955:
------------------------------------
Assignee: Apache Spark