David, Tom,
Thanks for the explanation. This confirms my suspicion that the executor
holds on to memory regardless of the tasks in execution, once it has
expanded to occupy the amount set by spark.executor.memory. There is
certainly scope for improvement here.
To be precise, the MesosExecutorBackend's -Xms and -Xmx both equal
spark.executor.memory, so there's no question of the memory held by the
executor expanding or contracting.
On Sat, Oct 17, 2015 at 5:38 PM, Bharath Ravi Kumar wrote:
Can someone who is aware of the reason for such a memory footprint
respond? It seems unintuitive and hard to reason about.
Thanks,
Bharath
On Thu, Oct 15, 2015 at 12:29 PM, Bharath Ravi Kumar wrote:
Resending since user@mesos bounced earlier. My apologies.
On Thu, Oct 15, 2015 at 12:19 PM, Bharath Ravi Kumar wrote:
(Reviving this thread since I ran into similar issues...)
I'm running two Spark jobs (in Mesos fine-grained mode), each belonging to
a different Mesos role, say low and high. The low:high Mesos weights are
1:10. As expected, I see that the low-priority job occupies cluster
resources to the
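For context, a sketch of the kind of role/weight setup described above (flag and property names as I understand them for Mesos and Spark of that era; the role names and weights are the ones from this thread, everything else is illustrative):

```shell
# Give the "high" role 10x the allocation weight of "low"
# (Mesos master flag; verify the syntax against your Mesos version).
mesos-master --weights="low=1,high=10" ...

# Each Spark job then registers with the Mesos master under its role:
spark-submit --conf spark.mesos.role=low  ... low-priority-job.jar
spark-submit --conf spark.mesos.role=high ... high-priority-job.jar
```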
(Adding spark user list)
Hi Tom,
If I understand correctly, you're saying that you're running into memory
problems because the scheduler is allocating too many CPUs and not enough
memory to accommodate them, right?
In the case of fine-grained mode I don't think that's a problem, since we
have a fixed
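The email breaks off here. For what it's worth, here is a sketch of how the fine-grained mode under discussion is selected (the master URL and memory value are illustrative, since the thread doesn't show the exact configuration):

```shell
# Fine-grained mode: CPU shares are offered and released per task,
# but the executor JVM heap stays fixed at spark.executor.memory
# for the lifetime of the executor.
spark-submit \
  --master mesos://zk://host:2181/mesos \
  --conf spark.mesos.coarse=false \
  --conf spark.executor.memory=4g \
  app.jar
```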