> --
> *From:* Jerry Lam [chiling...@gmail.com]
> *Sent:* Monday, July 20, 2015 8:27 AM
> *To:* Jahagirdar, Madhu
> *Cc:* user; dev@spark.apache.org
> *Subject:* Re: Spark Mesos Dispatcher
>
> Yes.
>
> Sent from my iPhone
>
> On 19 Jul, 2015, at 10:52 pm, "Jahagirdar, Madhu"
> wrote:
>
> All,
>
> Can we run different versions of Spark using the same Mesos Dispatcher? For
> example, can we run drivers with Spark 1.3 and Spark 1.4 at the same time?
>
> Regards,
> Madhu Jahagirdar
>
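One way the multi-version question can work in practice (a sketch, not a definitive setup; the dispatcher host, port, and tarball URLs below are placeholders): each driver submitted through the dispatcher in cluster mode can point at its own Spark distribution via `spark.executor.uri`, so two submissions can pin different Spark versions.

```shell
# Submit a Spark 1.3 driver through the dispatcher (cluster mode).
# Dispatcher address and tarball URLs are hypothetical.
spark-submit \
  --master mesos://dispatcher-host:7077 \
  --deploy-mode cluster \
  --conf spark.executor.uri=http://repo.example.com/spark-1.3.1-bin-hadoop2.4.tgz \
  app-built-against-1.3.jar

# Submit a Spark 1.4 driver through the same dispatcher.
spark-submit \
  --master mesos://dispatcher-host:7077 \
  --deploy-mode cluster \
  --conf spark.executor.uri=http://repo.example.com/spark-1.4.0-bin-hadoop2.4.tgz \
  app-built-against-1.4.jar
```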
>> ...distributing these tasks to these long-running Spark
>> executors.
> Mesos resources become more static in coarse-grained mode, as it will
> just launch a number of these CoarseGrainedExecutorBackends and keep
> them running until the driver stops. Note this is subject to change
> with dynamic allocation and other Spark/Mesos patches going into
> Spark.
>
> Tim
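The dynamic-allocation setup Tim alludes to looks roughly like this (a sketch of the relevant properties only, with illustrative numbers; this says nothing about which release they landed in):

```shell
# Dynamic allocation requires the external shuffle service so executors
# can be removed without losing shuffle data. Values are illustrative.
spark-submit \
  --master mesos://host:5050 \
  --conf spark.shuffle.service.enabled=true \
  --conf spark.dynamicAllocation.enabled=true \
  --conf spark.dynamicAllocation.minExecutors=2 \
  --conf spark.dynamicAllocation.maxExecutors=20 \
  my-app.jar
```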
What is the current target for a release (or patch landing) of dynamic
allocation in Spark/Mesos? Dynamic allocation for coarse-grained mode is
what I'm hoping for.
On Tue, May 5, 2015 at 6:19 AM, Gidon Gershinsky wrote:
> Hi all,
>
> I have a few questions on how Spark is integrated with Mesos - any
> details, or pointers to a design document / relevant source, will be much
> appreciated.
Speaking as a user of Spark on Mesos:

Yes, it appears that each app shows up as a separate framework on the Mesos
master.

In fine-grained mode the number of executors goes up and down, versus fixed in
coarse-grained mode.

I would not run fine-grained mode on a large cluster, as it can potentially
spin up a lot of executors.
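The two modes being contrasted above can be sketched with the Spark-on-Mesos settings of that era (master URL and app jar are placeholders; `spark.mesos.coarse` defaulted to false, i.e. fine-grained, in Spark 1.x):

```shell
# Coarse-grained: a fixed set of long-running executors, capped by spark.cores.max.
spark-submit \
  --master mesos://zk://zk-host:2181/mesos \
  --conf spark.mesos.coarse=true \
  --conf spark.cores.max=24 \
  my-app.jar

# Fine-grained (the 1.x default): executor count grows and shrinks with the
# task load, which is what can spin up many executors on a large cluster.
spark-submit \
  --master mesos://zk://zk-host:2181/mesos \
  --conf spark.mesos.coarse=false \
  my-app.jar
```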
Hi all,
I have a few questions on how Spark is integrated with Mesos - any
details, or pointers to a design document / relevant source, will be much
appreciated.
I'm aware of this description,
https://github.com/apache/spark/blob/master/docs/running-on-mesos.md
But it's pretty high-level.
Hi Tobias,

Regarding my comment on closure serialization:

I was discussing it with my fellow Sparkers here, and I totally overlooked
the fact that you need the class files to deserialize the closures (or
whatever) on the workers, so you always need the jar file delivered to the
workers in order for that to work.
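A quick illustration of that point (a sketch; master address and jar names are placeholders): the application jar, plus any dependency jars, has to reach the workers so their JVMs can load the classes that the deserialized closures refer to.

```shell
# Ship the application jar and an extra dependency jar to the workers.
# Without the class files on the workers, closure deserialization fails
# (typically with a ClassNotFoundException).
spark-submit \
  --master mesos://host:5050 \
  --jars deps/extra-lib.jar \
  my-app.jar
```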
Hi Tobias,
I was curious about this issue and tried to run your example on my local
Mesos. I was able to reproduce your issue using your current config:
[error] (run-main-0) org.apache.spark.SparkException: Job aborted: Task
1.0:4 failed 4 times (most recent failure: Exception failure:
java.lang.