[jira] [Updated] (SPARK-8119) HeartbeatReceiver should not adjust application executor resources

2016-01-02 Thread Sean Owen (JIRA)

[ https://issues.apache.org/jira/browse/SPARK-8119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sean Owen updated SPARK-8119:
-
Labels:   (was: backport-needed)

> HeartbeatReceiver should not adjust application executor resources
> --
>
> Key: SPARK-8119
> URL: https://issues.apache.org/jira/browse/SPARK-8119
> Project: Spark
>  Issue Type: Bug
>  Components: Spark Core
>Affects Versions: 1.4.0
>Reporter: SaintBacchus
>Assignee: Andrew Or
>Priority: Critical
> Fix For: 1.5.0
>
>
> Dynamic allocation lowers the total executor target to a small number when it 
> wants to kill some executors.
> But even when dynamic allocation is disabled, Spark still adjusts the total 
> executor target in the same way.
> This causes the following problem: when an executor dies, Spark never brings 
> up a replacement for it.
> === EDIT by andrewor14 ===
> The issue is that the AM forgets about the original number of executors it 
> wants after calling sc.killExecutor. Even if dynamic allocation is not 
> enabled, this is still possible because of heartbeat timeouts.
> I think the problem is that sc.killExecutor is used incorrectly in 
> HeartbeatReceiver. The intention of the method is to permanently adjust the 
> number of executors the application will get. In HeartbeatReceiver, however, 
> this is used as a best-effort mechanism to ensure that the timed out executor 
> is dead.
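
To make the two kill semantics concrete, below is a minimal, self-contained Scala sketch. The names (ExecutorAllocator, killAndAdjustTarget, killBestEffort, replenish) are illustrative only and do not correspond to real Spark internals; the sketch merely models why lowering the executor target on a heartbeat timeout leaves the application permanently under-provisioned, whereas a best-effort kill that preserves the target lets a replacement come back.

// Hypothetical model of the two kill semantics discussed above.
// None of these names are real Spark APIs.
object KillSemanticsSketch {

  // Toy allocator tracking how many executors the application wants
  // and which ones are currently alive.
  class ExecutorAllocator(var targetNumExecutors: Int) {
    var live: Set[String] = (1 to targetNumExecutors).map(i => s"exec-$i").toSet

    // Semantics of sc.killExecutor as described in the issue: the target is
    // lowered, so the lost executor is never replaced.
    def killAndAdjustTarget(id: String): Unit = {
      live -= id
      targetNumExecutors -= 1
    }

    // Semantics HeartbeatReceiver actually needs: make sure the timed-out
    // executor is gone, but keep the original target so a replacement
    // will be requested.
    def killBestEffort(id: String): Unit = {
      live -= id
      // targetNumExecutors is intentionally left unchanged
    }

    // Models the cluster manager topping the application back up to its target.
    def replenish(): Unit = {
      var i = live.size
      while (live.size < targetNumExecutors) {
        i += 1
        live += s"exec-new-$i"
      }
    }
  }

  def main(args: Array[String]): Unit = {
    val a = new ExecutorAllocator(targetNumExecutors = 3)
    a.killAndAdjustTarget("exec-1") // what HeartbeatReceiver did via sc.killExecutor
    a.replenish()
    println(s"kill-and-adjust: ${a.live.size} executors") // 2, never recovers

    val b = new ExecutorAllocator(targetNumExecutors = 3)
    b.killBestEffort("exec-1") // what the issue argues HeartbeatReceiver should do
    b.replenish()
    println(s"best-effort kill: ${b.live.size} executors") // 3, replacement arrives
  }
}

Running the sketch prints 2 executors for the kill-and-adjust case and 3 for the best-effort case, which matches the symptom in the report: with the original HeartbeatReceiver behaviour a timed-out executor is never pulled back up.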



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-8119) HeartbeatReceiver should not adjust application executor resources

2015-09-10 Thread Sean Owen (JIRA)

[ https://issues.apache.org/jira/browse/SPARK-8119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sean Owen updated SPARK-8119:
-
Target Version/s: 1.4.2  (was: 1.4.2, 1.5.1)

> HeartbeatReceiver should not adjust application executor resources
> --
>
> Key: SPARK-8119
> URL: https://issues.apache.org/jira/browse/SPARK-8119
> Project: Spark
>  Issue Type: Bug
>  Components: Spark Core
>Affects Versions: 1.4.0
>Reporter: SaintBacchus
>Assignee: Andrew Or
>Priority: Critical
>  Labels: backport-needed
> Fix For: 1.5.0
>
>
> Dynamic allocation lowers the total executor target to a small number when it 
> wants to kill some executors.
> But even when dynamic allocation is disabled, Spark still adjusts the total 
> executor target in the same way.
> This causes the following problem: when an executor dies, Spark never brings 
> up a replacement for it.
> === EDIT by andrewor14 ===
> The issue is that the AM forgets about the original number of executors it 
> wants after calling sc.killExecutor. Even if dynamic allocation is not 
> enabled, this is still possible because of heartbeat timeouts.
> I think the problem is that sc.killExecutor is used incorrectly in 
> HeartbeatReceiver. The intention of the method is to permanently adjust the 
> number of executors the application will get. In HeartbeatReceiver, however, 
> this is used as a best-effort mechanism to ensure that the timed out executor 
> is dead.






[jira] [Updated] (SPARK-8119) HeartbeatReceiver should not adjust application executor resources

2015-09-10 Thread Sean Owen (JIRA)

[ https://issues.apache.org/jira/browse/SPARK-8119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sean Owen updated SPARK-8119:
-
Target Version/s: 1.4.2, 1.5.1  (was: 1.5.1)

> HeartbeatReceiver should not adjust application executor resources
> --
>
> Key: SPARK-8119
> URL: https://issues.apache.org/jira/browse/SPARK-8119
> Project: Spark
>  Issue Type: Bug
>  Components: Spark Core
>Affects Versions: 1.4.0
>Reporter: SaintBacchus
>Assignee: Andrew Or
>Priority: Critical
>  Labels: backport-needed
> Fix For: 1.5.0
>
>
> Dynamic allocation lowers the total executor target to a small number when it 
> wants to kill some executors.
> But even when dynamic allocation is disabled, Spark still adjusts the total 
> executor target in the same way.
> This causes the following problem: when an executor dies, Spark never brings 
> up a replacement for it.
> === EDIT by andrewor14 ===
> The issue is that the AM forgets about the original number of executors it 
> wants after calling sc.killExecutor. Even if dynamic allocation is not 
> enabled, this is still possible because of heartbeat timeouts.
> I think the problem is that sc.killExecutor is used incorrectly in 
> HeartbeatReceiver. The intention of the method is to permanently adjust the 
> number of executors the application will get. In HeartbeatReceiver, however, 
> this is used as a best-effort mechanism to ensure that the timed out executor 
> is dead.






[jira] [Updated] (SPARK-8119) HeartbeatReceiver should not adjust application executor resources

2015-09-10 Thread Sean Owen (JIRA)

[ https://issues.apache.org/jira/browse/SPARK-8119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sean Owen updated SPARK-8119:
-
Target Version/s: 1.5.1  (was: 1.4.2, 1.5.0)

> HeartbeatReceiver should not adjust application executor resources
> --
>
> Key: SPARK-8119
> URL: https://issues.apache.org/jira/browse/SPARK-8119
> Project: Spark
>  Issue Type: Bug
>  Components: Spark Core
>Affects Versions: 1.4.0
>Reporter: SaintBacchus
>Assignee: Andrew Or
>Priority: Critical
>  Labels: backport-needed
> Fix For: 1.5.0
>
>
> Dynamic allocation lowers the total executor target to a small number when it 
> wants to kill some executors.
> But even when dynamic allocation is disabled, Spark still adjusts the total 
> executor target in the same way.
> This causes the following problem: when an executor dies, Spark never brings 
> up a replacement for it.
> === EDIT by andrewor14 ===
> The issue is that the AM forgets about the original number of executors it 
> wants after calling sc.killExecutor. Even if dynamic allocation is not 
> enabled, this is still possible because of heartbeat timeouts.
> I think the problem is that sc.killExecutor is used incorrectly in 
> HeartbeatReceiver. The intention of the method is to permanently adjust the 
> number of executors the application will get. In HeartbeatReceiver, however, 
> this is used as a best-effort mechanism to ensure that the timed out executor 
> is dead.






[jira] [Updated] (SPARK-8119) HeartbeatReceiver should not adjust application executor resources

2015-07-28 Thread Josh Rosen (JIRA)

[ https://issues.apache.org/jira/browse/SPARK-8119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Josh Rosen updated SPARK-8119:
--
Labels: backport-needed  (was: )

> HeartbeatReceiver should not adjust application executor resources
> --
>
> Key: SPARK-8119
> URL: https://issues.apache.org/jira/browse/SPARK-8119
> Project: Spark
>  Issue Type: Bug
>  Components: Spark Core
>Affects Versions: 1.4.0
>Reporter: SaintBacchus
>Assignee: Andrew Or
>Priority: Critical
>  Labels: backport-needed
> Fix For: 1.5.0
>
>
> Dynamic allocation lowers the total executor target to a small number when it 
> wants to kill some executors.
> But even when dynamic allocation is disabled, Spark still adjusts the total 
> executor target in the same way.
> This causes the following problem: when an executor dies, Spark never brings 
> up a replacement for it.
> === EDIT by andrewor14 ===
> The issue is that the AM forgets about the original number of executors it 
> wants after calling sc.killExecutor. Even if dynamic allocation is not 
> enabled, this is still possible because of heartbeat timeouts.
> I think the problem is that sc.killExecutor is used incorrectly in 
> HeartbeatReceiver. The intention of the method is to permanently adjust the 
> number of executors the application will get. In HeartbeatReceiver, however, 
> this is used as a best-effort mechanism to ensure that the timed out executor 
> is dead.






[jira] [Updated] (SPARK-8119) HeartbeatReceiver should not adjust application executor resources

2015-07-16 Thread Andrew Or (JIRA)

[ https://issues.apache.org/jira/browse/SPARK-8119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Andrew Or updated SPARK-8119:
-
Fix Version/s: 1.5.0

> HeartbeatReceiver should not adjust application executor resources
> --
>
> Key: SPARK-8119
> URL: https://issues.apache.org/jira/browse/SPARK-8119
> Project: Spark
>  Issue Type: Bug
>  Components: Spark Core
>Affects Versions: 1.4.0
>Reporter: SaintBacchus
>Assignee: Andrew Or
>Priority: Critical
> Fix For: 1.5.0
>
>
> Dynamic allocation lowers the total executor target to a small number when it 
> wants to kill some executors.
> But even when dynamic allocation is disabled, Spark still adjusts the total 
> executor target in the same way.
> This causes the following problem: when an executor dies, Spark never brings 
> up a replacement for it.
> === EDIT by andrewor14 ===
> The issue is that the AM forgets about the original number of executors it 
> wants after calling sc.killExecutor. Even if dynamic allocation is not 
> enabled, this is still possible because of heartbeat timeouts.
> I think the problem is that sc.killExecutor is used incorrectly in 
> HeartbeatReceiver. The intention of the method is to permanently adjust the 
> number of executors the application will get. In HeartbeatReceiver, however, 
> this is used as a best-effort mechanism to ensure that the timed out executor 
> is dead.






[jira] [Updated] (SPARK-8119) HeartbeatReceiver should not adjust application executor resources

2015-07-16 Thread Andrew Or (JIRA)

[ https://issues.apache.org/jira/browse/SPARK-8119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Andrew Or updated SPARK-8119:
-
Target Version/s: 1.4.2, 1.5.0  (was: 1.5.0)

> HeartbeatReceiver should not adjust application executor resources
> --
>
> Key: SPARK-8119
> URL: https://issues.apache.org/jira/browse/SPARK-8119
> Project: Spark
>  Issue Type: Bug
>  Components: Spark Core
>Affects Versions: 1.4.0
>Reporter: SaintBacchus
>Assignee: Andrew Or
>Priority: Critical
>
> Dynamic allocation lowers the total executor target to a small number when it 
> wants to kill some executors.
> But even when dynamic allocation is disabled, Spark still adjusts the total 
> executor target in the same way.
> This causes the following problem: when an executor dies, Spark never brings 
> up a replacement for it.
> === EDIT by andrewor14 ===
> The issue is that the AM forgets about the original number of executors it 
> wants after calling sc.killExecutor. Even if dynamic allocation is not 
> enabled, this is still possible because of heartbeat timeouts.
> I think the problem is that sc.killExecutor is used incorrectly in 
> HeartbeatReceiver. The intention of the method is to permanently adjust the 
> number of executors the application will get. In HeartbeatReceiver, however, 
> this is used as a best-effort mechanism to ensure that the timed out executor 
> is dead.






[jira] [Updated] (SPARK-8119) HeartbeatReceiver should not adjust application executor resources

2015-07-16 Thread Andrew Or (JIRA)

[ https://issues.apache.org/jira/browse/SPARK-8119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Andrew Or updated SPARK-8119:
-
Target Version/s: 1.5.0

> HeartbeatReceiver should not adjust application executor resources
> --
>
> Key: SPARK-8119
> URL: https://issues.apache.org/jira/browse/SPARK-8119
> Project: Spark
>  Issue Type: Bug
>  Components: Spark Core
>Affects Versions: 1.4.0
>Reporter: SaintBacchus
>Assignee: Andrew Or
>Priority: Critical
>
> Dynamic allocation lowers the total executor target to a small number when it 
> wants to kill some executors.
> But even when dynamic allocation is disabled, Spark still adjusts the total 
> executor target in the same way.
> This causes the following problem: when an executor dies, Spark never brings 
> up a replacement for it.
> === EDIT by andrewor14 ===
> The issue is that the AM forgets about the original number of executors it 
> wants after calling sc.killExecutor. Even if dynamic allocation is not 
> enabled, this is still possible because of heartbeat timeouts.
> I think the problem is that sc.killExecutor is used incorrectly in 
> HeartbeatReceiver. The intention of the method is to permanently adjust the 
> number of executors the application will get. In HeartbeatReceiver, however, 
> this is used as a best-effort mechanism to ensure that the timed out executor 
> is dead.






[jira] [Updated] (SPARK-8119) HeartbeatReceiver should not adjust application executor resources

2015-07-14 Thread Andrew Or (JIRA)

[ https://issues.apache.org/jira/browse/SPARK-8119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Andrew Or updated SPARK-8119:
-
Assignee: Andrew Or

> HeartbeatReceiver should not adjust application executor resources
> --
>
> Key: SPARK-8119
> URL: https://issues.apache.org/jira/browse/SPARK-8119
> Project: Spark
>  Issue Type: Bug
>  Components: Spark Core
>Affects Versions: 1.4.0
>Reporter: SaintBacchus
>Assignee: Andrew Or
>Priority: Critical
>
> Dynamic allocation lowers the total executor target to a small number when it 
> wants to kill some executors.
> But even when dynamic allocation is disabled, Spark still adjusts the total 
> executor target in the same way.
> This causes the following problem: when an executor dies, Spark never brings 
> up a replacement for it.
> === EDIT by andrewor14 ===
> The issue is that the AM forgets about the original number of executors it 
> wants after calling sc.killExecutor. Even if dynamic allocation is not 
> enabled, this is still possible because of heartbeat timeouts.
> I think the problem is that sc.killExecutor is used incorrectly in 
> HeartbeatReceiver. The intention of the method is to permanently adjust the 
> number of executors the application will get. In HeartbeatReceiver, however, 
> this is used as a best-effort mechanism to ensure that the timed out executor 
> is dead.






[jira] [Updated] (SPARK-8119) HeartbeatReceiver should not adjust application executor resources

2015-07-14 Thread Andrew Or (JIRA)

[ https://issues.apache.org/jira/browse/SPARK-8119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Andrew Or updated SPARK-8119:
-
Priority: Critical  (was: Major)

> HeartbeatReceiver should not adjust application executor resources
> --
>
> Key: SPARK-8119
> URL: https://issues.apache.org/jira/browse/SPARK-8119
> Project: Spark
>  Issue Type: Bug
>  Components: Spark Core
>Affects Versions: 1.4.0
>Reporter: SaintBacchus
>Priority: Critical
>
> Dynamic allocation lowers the total executor target to a small number when it 
> wants to kill some executors.
> But even when dynamic allocation is disabled, Spark still adjusts the total 
> executor target in the same way.
> This causes the following problem: when an executor dies, Spark never brings 
> up a replacement for it.
> === EDIT by andrewor14 ===
> The issue is that the AM forgets about the original number of executors it 
> wants after calling sc.killExecutor. Even if dynamic allocation is not 
> enabled, this is still possible because of heartbeat timeouts.
> I think the problem is that sc.killExecutor is used incorrectly in 
> HeartbeatReceiver. The intention of the method is to permanently adjust the 
> number of executors the application will get. In HeartbeatReceiver, however, 
> this is used as a best-effort mechanism to ensure that the timed out executor 
> is dead.






[jira] [Updated] (SPARK-8119) HeartbeatReceiver should not adjust application executor resources

2015-06-29 Thread Andrew Or (JIRA)

[ https://issues.apache.org/jira/browse/SPARK-8119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Andrew Or updated SPARK-8119:
-
Component/s: (was: Scheduler)

> HeartbeatReceiver should not adjust application executor resources
> --
>
> Key: SPARK-8119
> URL: https://issues.apache.org/jira/browse/SPARK-8119
> Project: Spark
>  Issue Type: Bug
>  Components: Spark Core
>Affects Versions: 1.4.0
>Reporter: SaintBacchus
>
> Dynamic allocation lowers the total executor target to a small number when it 
> wants to kill some executors.
> But even when dynamic allocation is disabled, Spark still adjusts the total 
> executor target in the same way.
> This causes the following problem: when an executor dies, Spark never brings 
> up a replacement for it.
> === EDIT by andrewor14 ===
> The issue is that the AM forgets about the original number of executors it 
> wants after calling sc.killExecutor. Even if dynamic allocation is not 
> enabled, this is still possible because of heartbeat timeouts.
> I think the problem is that sc.killExecutor is used incorrectly in 
> HeartbeatReceiver. The intention of the method is to permanently adjust the 
> number of executors the application will get. In HeartbeatReceiver, however, 
> this is used as a best-effort mechanism to ensure that the timed out executor 
> is dead.






[jira] [Updated] (SPARK-8119) HeartbeatReceiver should not adjust application executor resources

2015-06-29 Thread Andrew Or (JIRA)

[ https://issues.apache.org/jira/browse/SPARK-8119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Andrew Or updated SPARK-8119:
-
Component/s: Spark Core

> HeartbeatReceiver should not adjust application executor resources
> --
>
> Key: SPARK-8119
> URL: https://issues.apache.org/jira/browse/SPARK-8119
> Project: Spark
>  Issue Type: Bug
>  Components: Spark Core
>Affects Versions: 1.4.0
>Reporter: SaintBacchus
>
> Dynamic allocation lowers the total executor target to a small number when it 
> wants to kill some executors.
> But even when dynamic allocation is disabled, Spark still adjusts the total 
> executor target in the same way.
> This causes the following problem: when an executor dies, Spark never brings 
> up a replacement for it.
> === EDIT by andrewor14 ===
> The issue is that the AM forgets about the original number of executors it 
> wants after calling sc.killExecutor. Even if dynamic allocation is not 
> enabled, this is still possible because of heartbeat timeouts.
> I think the problem is that sc.killExecutor is used incorrectly in 
> HeartbeatReceiver. The intention of the method is to permanently adjust the 
> number of executors the application will get. In HeartbeatReceiver, however, 
> this is used as a best-effort mechanism to ensure that the timed out executor 
> is dead.






[jira] [Updated] (SPARK-8119) HeartbeatReceiver should not adjust application executor resources

2015-06-29 Thread Andrew Or (JIRA)

[ https://issues.apache.org/jira/browse/SPARK-8119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Andrew Or updated SPARK-8119:
-
Summary: HeartbeatReceiver should not adjust application executor resources 
 (was: HeartbeatReceiver should not call sc.killExecutor)

> HeartbeatReceiver should not adjust application executor resources
> --
>
> Key: SPARK-8119
> URL: https://issues.apache.org/jira/browse/SPARK-8119
> Project: Spark
>  Issue Type: Bug
>  Components: Scheduler
>Affects Versions: 1.4.0
>Reporter: SaintBacchus
>
> Dynamic allocation lowers the total executor target to a small number when it 
> wants to kill some executors.
> But even when dynamic allocation is disabled, Spark still adjusts the total 
> executor target in the same way.
> This causes the following problem: when an executor dies, Spark never brings 
> up a replacement for it.
> === EDIT by andrewor14 ===
> The issue is that the AM forgets about the original number of executors it 
> wants after calling sc.killExecutor. Even if dynamic allocation is not 
> enabled, this is still possible because of heartbeat timeouts.
> I think the problem is that sc.killExecutor is used incorrectly in 
> HeartbeatReceiver. The intention of the method is to permanently adjust the 
> number of executors the application will get. In HeartbeatReceiver, however, 
> this is used as a best-effort mechanism to ensure that the timed out executor 
> is dead.


