Why would the Spark master run out of RAM if I have too many slaves? Is
this a flaw in the code? I'm just a user of Spark; the developer who set
this up left the company, so I'm starting from the top here.
So I noticed that if I spawn lots of jobs, my Spark master ends up crashing
due to low memory.
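(For context: the standalone master keeps state for every application and job
submitted to it in its own JVM, so a large number of jobs can grow its heap
until it falls over. Its heap is controlled by SPARK_DAEMON_MEMORY; the snippet
below is only a sketch with illustrative values, assuming a standalone
deployment and the default conf layout.)

    # conf/spark-env.sh on the master node
    # Raise the heap for the standalone master/worker daemons (default is 1g).
    export SPARK_DAEMON_MEMORY=2g
    # Optionally keep fewer finished applications in the master's state/UI.
    export SPARK_MASTER_OPTS="-Dspark.deploy.retainedApplications=50"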
I have a 40GB ephemeral disk on /mnt and another one on /mnt2. The person
who set this up has left. I was only aware of having maybe one EBS disk,
but I guess this was launched with two EBS volumes using one of the --ebs
options? Or are those two instance-store disks part of the AMI?
Thanks
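(For what it's worth, /mnt and /mnt2 are normally the instance-store, i.e.
ephemeral, disks that come with the EC2 instance type itself rather than EBS
volumes; extra EBS volumes would have been requested when the cluster was
launched. A sketch of what such a launch could look like with spark-ec2,
assuming the script from that era; the key pair, cluster name, and sizes are
placeholders, and the flag names may differ by version:)

    # Hypothetical spark-ec2 launch attaching two 40GB EBS volumes per node
    ./spark-ec2 -k mykey -i mykey.pem -s 10 \
        --ebs-vol-num=2 --ebs-vol-size=40 \
        launch my-cluster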
I'm very new to Apache Spark. I'm just a user, not a developer.
I'm running a cluster with many spot instances. Am I correct in
understanding that Spark can handle an unlimited number of spot instance
failures and restarts? Sometimes all the spot instances will disappear
without warning and then come back later.
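(As far as I understand it, the master and the driver have to stay up, but
tasks that were running on lost workers are rescheduled on whatever executors
remain, so losing spot slaves mid-job is survivable; losing all of them just
stalls the job until workers register again. Two settings that affect how far
a job is pushed before it gives up, shown as a sketch with illustrative values
rather than recommendations:)

    # conf/spark-defaults.conf
    spark.task.maxFailures   8     # per-task retry limit before the job is failed (default 4)
    spark.speculation        true  # re-launch slow copies of straggling tasks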
Never mind. What I was missing was waiting long enough :-O.
Sorry about that.
On Thu, Mar 24, 2016 at 11:20 AM, Dillian Murphey <crackshotm...@gmail.com>
wrote:
> Had 15 slaves.
>
> Added 10 more.
>
> Shut down some slaves in the first bunch of 15.
>
> The 10 slaves I added are sitting there idle.
Had 15 slaves.
Added 10 more.
Shut down some slaves in the first bunch of 15.
The 10 slaves I added are sitting there idle. Spark did not assign idle
cores to pick up the slack.
What am I missing?
Thanks for any help. Confused. :-i
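(One thing that can cause exactly this: an application that is already running
only uses the cores it originally acquired, so freshly registered workers sit
idle unless the app is allowed to grow. Dynamic allocation lets a running app
take executors on new workers as they register; a sketch, assuming standalone
mode, with illustrative values:)

    # conf/spark-defaults.conf
    spark.dynamicAllocation.enabled        true
    spark.shuffle.service.enabled          true   # external shuffle service is required for dynamic allocation
    spark.dynamicAllocation.maxExecutors   100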
Everything I've found so far seems to indicate it isn't supported yet.
But here I am with 1.5, and it at least appears to be working. Am I
missing something?
On Tue, Nov 24, 2015 at 4:40 PM, Dillian Murphey <crackshotm...@gmail.com>
wrote:
> What's the current status on adding slaves to a running cluster?
What's the current status on adding slaves to a running cluster? I want to
leverage spark-ec2 and autoscaling groups. I want to launch slaves as spot
instances when I need to do some heavy lifting, but I don't want to bring
down my cluster in order to add nodes.
Can this be done by just running
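(In case it's useful: with a standalone master, a new node can usually be
attached to a live cluster without touching the existing nodes, which is the
piece an autoscaling group would run on boot. A sketch, assuming the spark-ec2
AMI's install path under /root/spark and the default master port; the master
hostname is a placeholder:)

    # Run on the newly launched node; it registers itself with the existing master.
    /root/spark/sbin/start-slave.sh spark://<master-hostname>:7077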