I know there are some community efforts shown in Spark summits before,
mostly around reusing the same Spark context with multiple “jobs”.
Reducing Spark job startup time isn’t a community priority, as far as I know.
Tim
On Fri, Jul 6, 2018 at 7:12 PM Tien Dat wrote:
> Dear Timothy,
>
> It works
Hi Tien,
There is no retry on the job level as we expect the user to retry, and as
you mention we tolerate tasks retry already.
There is no request/limit-type resource configuration like the one you described
in Mesos (yet).
So for 2) that’s not possible at the moment.
Tim
On Fri, Jul 6, 2018 at
Got it, then you can have an extracted Spark directory on each host on the
same location, and don’t specify SPARK_EXECUTOR_URI. Instead, set
spark.mesos.executor.home to that directory.
This should effectively do what you want: it avoids fetching and extracting, and
just executes the command.
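A minimal sketch of the suggestion above (the master URL, install path, and example job are assumptions, not from the thread):

```shell
# Point executors at a Spark directory pre-extracted at the same path on
# every host, instead of fetching/extracting an archive via SPARK_EXECUTOR_URI.
spark-submit \
  --master mesos://zk://zk1:2181/mesos \
  --conf spark.mesos.executor.home=/opt/spark \
  --class org.apache.spark.examples.SparkPi \
  /opt/spark/examples/jars/spark-examples.jar 100
```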
If it’s available locally on each host, then don’t specify a remote URL but a
local file URI instead.
We added a fetcher cache to Mesos a while ago; I believe there is integration in
the Spark framework if you look at the documentation as well. With the fetcher
cache enabled, the Mesos agent will cache
Interested to try as well.
Tim
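A hedged sketch of enabling the fetcher cache mentioned above (the property name follows the Spark-on-Mesos docs; the archive URL and job are assumptions):

```shell
# Let the Mesos agent cache the executor archive instead of re-downloading
# it for every task; requires a Spark version with fetcher-cache integration.
spark-submit \
  --master mesos://zk://zk1:2181/mesos \
  --conf spark.mesos.fetcherCache.enable=true \
  --conf spark.executor.uri=http://repo.example.com/spark-2.3.0-bin-hadoop2.7.tgz \
  --class org.apache.spark.examples.SparkPi \
  local:///opt/spark/examples/jars/spark-examples.jar 100
```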
On Tue, Jan 23, 2018 at 5:54 PM, Raj Adyanthaya wrote:
> It's very interesting and I do agree that it will get a lot of traction once
> made open source.
>
> On Mon, Jan 22, 2018 at 9:01 PM, Rohit Karlupia wrote:
>>
>> Hi,
>>
Hi Satya,
--jars doesn't work with local files because Mesos cluster mode doesn't
upload or stage files automatically.
For now you need to put these files in a location that the Driver can access.
Tim
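One workaround sketch for the limitation above, assuming the jars can be served over HTTP (the host, port, and file names are hypothetical):

```shell
# Serve the dependency jars from a location the driver (running inside the
# cluster) can reach, then reference them by URL instead of a local path.
python3 -m http.server 8000 --directory /srv/jars &

spark-submit \
  --master mesos://dispatcher-host:7077 \
  --deploy-mode cluster \
  --jars http://my-host:8000/dep.jar \
  --class com.example.Main \
  http://my-host:8000/app.jar
```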
On Tue, May 16, 2017 at 10:17 PM, Satya Narayan1
wrote:
> creating
> I1230 14:30:12.473937 9572 master.cpp:5709] Sending 1 offers to framework
> 993198d1-7393-4656-9f75-4f22702609d0-0251 (eval.py) at
> scheduler-9300fd07-7cf5-4341-84c9-4f1930e8c145@172.16.1.101:40286
>
>
>
> On Fri, Dec 30, 2016 at 1:35 PM, Timothy Chen <tnac...@gmail.c
Hi Ji,
One way to make it fixed is to set LIBPROCESS_PORT environment variable on the
executor when it is launched.
Tim
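A sketch of pinning that environment variable, using Spark's per-executor environment mechanism (the port value and master URL are assumptions):

```shell
# Fix the libprocess port on executors so it can be predicted and exposed,
# e.g. from inside a Docker container.
spark-submit \
  --master mesos://zk://zk1:2181/mesos \
  --conf spark.executorEnv.LIBPROCESS_PORT=40286 \
  --class org.apache.spark.examples.SparkPi \
  local:///opt/spark/examples/jars/spark-examples.jar 100
```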
> On Dec 30, 2016, at 1:23 PM, Ji Yan wrote:
>
> Dear Spark Users,
>
> We are trying to launch Spark on Mesos from within a docker container. We
> have
; wrote:
>>>>>
>>>>> That makes sense. From the documentation it looks like the executors
>>>>> are not supposed to terminate:
>>>>>
>>>>> http://spark.apache.org/docs/latest/running-on-mesos.html#fine-grained-deprecat
Hi Chawla,
One possible reason is that Mesos fine-grained mode also takes up cores to run
the executor on each host, so if you have 20 agents running the fine-grained
executor, it will take up 20 cores while it's still running.
Tim
On Fri, Dec 16, 2016 at 8:41 AM, Chawla,Sumit
Hi Jackie,
That doesn't work because GPU is a first-class resource for Mesos starting
with 1.0, and the patch from me to enable it is still in PR.
I gave a demo at the last Spark Summit SF about Spark/Mesos/GPU, and you
can watch the video to see how it works.
Feel free to try out the PR
Python should be supported, as I tested it; the patches should already be merged
in 1.6.2.
Tim
> On Sep 8, 2016, at 1:20 AM, Michael Gummelt wrote:
>
> Quite possibly. I've never used it. I know Python was "unsupported" for a
> while, which turned out to mean there was a
No, you don't need to install Spark on each slave; we have been running this
setup in Mesosphere without any problem so far. I think it's most likely a
configuration problem, and perhaps there is a chance something is missing in the
code to handle some cases.
What version of spark are you guys running?
We'll need more information to help you: what commands did you use to launch the
slave/master, and what error message did you see in the driver logs?
Tim
> On Mar 5, 2016, at 4:34 AM, Mailing List wrote:
>
> I am trying to do the same but till now no luck...
> I have
Can you go through the Mesos UI, look at the driver/executor log in the stderr
file, and see what the problem is?
Tim
> On Mar 1, 2016, at 8:05 AM, Ashish Soni wrote:
>
> Not sure what the issue is, but I am getting the below error when I try to run
> the Spark Pi example
>
>
Hi Renjie,
You can set the number of cores per executor with spark.executor.cores in
fine-grained mode.
If you want coarse-grained mode to support that, it will be supported in the
near term, as the coarse-grained scheduler is getting revamped now.
Tim
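A sketch of the setting named in the reply above (the value, master URL, and job are assumptions):

```shell
# Cap the cores each executor may use; in fine-grained mode task cores are
# still acquired and released dynamically per task.
spark-submit \
  --master mesos://zk://zk1:2181/mesos \
  --conf spark.executor.cores=4 \
  --class org.apache.spark.examples.SparkPi \
  local:///opt/spark/examples/jars/spark-examples.jar 100
```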
> On Nov 28, 2015, at 7:31 PM, Renjie Liu
Hi Jo,
Thanks for the links. I would have expected the properties to be in the
scheduler properties, but I need to double-check.
I'll be looking into these problems this week.
Tim
On Tue, Nov 17, 2015 at 10:28 AM, Jo Voordeckers
wrote:
> On Tue, Nov 17, 2015 at 5:16 AM, Iulian
Hi Remy,
Yes, with Docker bridge networking it's not possible yet; with host networking
it should work. I was planning to create a ticket and possibly work on that in
the future, as there are some changes needed on the Spark side.
Tim
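A sketch of the working (host-networking) setup described above; the image name and job are hypothetical:

```shell
# Run executors inside a Docker image. With host networking the executor can
# reach back to the driver, which Docker bridge mode currently breaks.
spark-submit \
  --master mesos://zk://zk1:2181/mesos \
  --conf spark.mesos.executor.docker.image=example/spark:2.0.0 \
  --class org.apache.spark.examples.SparkPi \
  local:///opt/spark/examples/jars/spark-examples.jar 100
```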
> On Nov 4, 2015, at 8:24 AM, PHELIPOT, REMY
Fine-grained mode does reuse the same JVM, but perhaps with different placement
or different allocated cores compared to the same total memory allocation.
Tim
Sent from my iPhone
> On Nov 3, 2015, at 6:00 PM, Reynold Xin wrote:
>
> Soren,
>
> If I understand how Mesos works
Hi Klaus,
Sorry, I'm not next to a computer, but it could possibly be a bug that it
doesn't take SPARK_HOME as the base path. Currently the Spark image seems to set
the working directory so that it works.
I'll look at the code to verify, but it seems like it could be the case. If it's
true feel free
>>> When the executor starts, will it read any of the environment that it's
>>> executing in or will it just take only the properties given to it by the
>>> dispatcher and nothing more?
>>>
>>> Lemme know if anything needs more clarification and thanks for your
Hi Alan,
If I understand correctly, you are setting the executor home when you launch the
dispatcher and not in the configuration when you submit the job, and expect it
to inherit that configuration?
When I worked on the dispatcher I was assuming all configuration is passed to
the dispatcher to
Hi Bcjaes,
Sorry, I didn't see the previous thread, so I'm not sure what issues you are
running into.
In cluster mode the driver logs and results are all available through the Mesos
UI; you need to look at terminated frameworks if it's a job that's already
finished.
I'll try to add more docs as we
Hi Dave,
I don't understand Kerberos much, but if you know the exact steps that need to
happen, I can see how we can make that happen with the Spark framework.
Tim
On Jun 26, 2015, at 8:49 AM, Dave Ariens dari...@blackberry.com wrote:
I understand that Kerberos support for accessing
I left a comment on your Stack Overflow question earlier. Can you share the
output in the stderr log from your Mesos task? It can be found in the Mesos UI
by going to the task's sandbox.
Tim
Sent from my iPhone
On Mar 29, 2015, at 12:14 PM, seglo wla...@gmail.com wrote:
The latter part of this
Hi John,
I think there are limitations in the way drivers are designed that require a
separate JVM process per driver, so it's not possible without code and design
changes AFAIK.
A driver shouldn't stay open past your job's lifetime though, so while not
sharing between apps it
Hi Michael,
I see you capped the cores at 60.
I wonder what settings you used for the standalone mode you compared with?
I can try to run an MLlib workload on both to compare.
Tim
On Jan 9, 2015, at 6:42 AM, Michael V Le m...@us.ibm.com wrote:
Hi Tim,
Thanks for your response.
that you're changing the Mesos scheduler. Is there a JIRA where
this work is taking place?
-kr, Gerard.
On Mon, Dec 22, 2014 at 6:01 PM, Timothy Chen tnac...@gmail.com wrote:
Hi Gerard,
Really nice guide!
I'm particularly interested in the Mesos scheduling side to more evenly
distribute cores