2) would be ideal, but given the velocity of the main branch, what Mesos
ended up doing was simply keeping a separate repo, since it would take
too long to merge back to main.
We ended up running it pre-release (or when a major PR merged) rather than on
every PR; I will also comment to ask users to run it.
We
-- Forwarded message --
From: Timothy Chen <tnac...@gmail.com>
Date: Thu, Aug 17, 2017 at 2:48 PM
Subject: Re: SPIP: Spark on Kubernetes
To: Marcelo Vanzin <van...@cloudera.com>
Hi Marcelo,
Agree with your points, and I had that same thought around Resource
st
+1 (non-binding)
Tim
On Tue, Aug 15, 2017 at 9:20 AM, Kimoon Kim wrote:
> +1 (non-binding)
>
> Thanks,
> Kimoon
>
> On Tue, Aug 15, 2017 at 9:19 AM, Sean Suchter
> wrote:
>>
>> +1 (non-binding)
>>
>>
>>
munity.
>
> It sounds like the only thing keeping it from being enabled is a timeout
> config and someone volunteering to do some testing?
>
>
> On Mon, Apr 3, 2017 at 2:19 PM Timothy Chen <tnac...@gmail.com> wrote:
>>
The only reason is that MesosClusterScheduler is by design long-running,
so we really needed it to have failover configured correctly.
I wanted to create a JIRA ticket to allow users to configure it for
each Spark framework, but just didn't remember to do so.
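As a sketch only, a per-driver failover timeout along these lines later landed in Spark's Mesos support; the property name `spark.mesos.driver.failoverTimeout` and the dispatcher host below are assumptions, so check your Spark version's Mesos docs:

```shell
# Hypothetical submit against a Mesos cluster dispatcher, setting a
# per-driver failover timeout (seconds). Property name is an assumption
# for illustration; verify it against your Spark release.
./bin/spark-submit \
  --master mesos://dispatcher-host:7077 \
  --deploy-mode cluster \
  --conf spark.mesos.driver.failoverTimeout=60.0 \
  my-app.jar
```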
Per another question that came up in
>
> Thanks,
>
> Jared, (韦煜)
> Software developer
> Interested in open source software, big data, Linux
>
>
>
>
> From: Timothy Chen <tnac...@gmail.com>
> Sent: Friday, March 31, 2017 5:13 AM
> To: Yu Wei
> Cc: us...@
I think failover isn't enabled on the regular Spark job framework, since we assume
jobs are more ephemeral.
It could be a good setting to add to the Spark framework to enable failover.
Tim
> On Mar 30, 2017, at 10:18 AM, Yu Wei wrote:
>
> Hi guys,
>
> I encountered a
; wrote:
>>>>>
>>>>> That makes sense. From the documentation it looks like the executors
>>>>> are not supposed to terminate:
>>>>>
>>>>> http://spark.apache.org/docs/latest/running-on-mesos.html#fine-grained-deprecat
Hi Chawla,
One possible reason is that Mesos fine-grained mode also takes up cores
to run the executor on each host, so if you have 20 agents running
fine-grained executors, they will hold 20 cores while still running.
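A quick back-of-the-envelope sketch of that overhead (the numbers are illustrative, matching the 20-agent example above):

```python
# Fine-grained mode keeps one long-lived executor JVM per agent, and each
# executor holds at least one core while it is alive, even between tasks.
def executor_core_overhead(num_agents: int, cores_per_executor: int = 1) -> int:
    """Total cores pinned cluster-wide by resident fine-grained executors."""
    return num_agents * cores_per_executor

# 20 agents each hosting one executor -> 20 cores held cluster-wide.
print(executor_core_overhead(20))
```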
Tim
On Fri, Dec 16, 2016 at 8:41 AM, Chawla,Sumit
Congrats Felix!
Tim
On Mon, Aug 8, 2016 at 11:15 AM, Matei Zaharia wrote:
> Hi all,
>
> The PMC recently voted to add Felix Cheung as a committer. Felix has been a
> major contributor to SparkR and we're excited to have him join officially.
> Congrats and welcome,
Hi,
How did you package the spark.tgz, and are you running the same code that you
packaged when you ran spark-submit?
And what do your Spark settings look like?
Tim
> On Jun 6, 2016, at 12:13 PM, thibaut wrote:
>
> Hi there,
>
> I am trying to configure
This will also simplify things for Mesos users; DC/OS has to work around
this with our own proxying.
Tim
On Sun, May 22, 2016 at 11:53 PM, Gurvinder Singh
wrote:
> Hi Reynold,
>
> So if that's OK with you, can I go ahead and create JIRA for this. As it
> seems this
I think it's just not implemented, +1 for adding it.
Tim
> On May 10, 2016, at 5:52 PM, Michael Gummelt wrote:
>
> Client mode doesn't seem to support remote JAR downloading, as reported here:
> https://issues.apache.org/jira/browse/SPARK-10643
>
> The docs here:
>
Yes, if you want to manually override which IP is used to be contacted by the master,
you can set LIBPROCESS_IP and LIBPROCESS_PORT.
These are Mesos-specific settings. We can definitely update the docs.
Note that in the future, as we move to the new Mesos HTTP API, these
configurations won't be needed.
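For illustration, these would typically be exported in the driver's environment (e.g. spark-env.sh) before contacting the Mesos master; the address and port below are made-up example values:

```shell
# Illustrative values only: Mesos libprocess overrides for the driver process.
export LIBPROCESS_IP=10.0.0.15   # IP the driver advertises to the Mesos master
export LIBPROCESS_PORT=9090      # fixed callback port instead of an ephemeral one
echo "libprocess endpoint: ${LIBPROCESS_IP}:${LIBPROCESS_PORT}"
```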
Hi Adam,
Thanks for the graphs and the tests; definitely interested in digging a
bit deeper to find out what could be the cause of this.
Do you have the spark driver logs for both runs?
Tim
On Mon, Nov 30, 2015 at 9:06 AM, Adam McElwee wrote:
> To eliminate any skepticism
Hi Jo,
Thanks for the links; I would have expected the properties to be in the
scheduler properties, but I need to double-check.
I'll be looking into these problems this week.
Tim
On Tue, Nov 17, 2015 at 10:28 AM, Jo Voordeckers
wrote:
> On Tue, Nov 17, 2015 at 5:16 AM, Iulian
Fine-grained mode does reuse the same JVM, but perhaps with different placement or
different allocated cores compared to the same total memory allocation.
Tim
Sent from my iPhone
> On Nov 3, 2015, at 6:00 PM, Reynold Xin wrote:
>
> Soren,
>
> If I understand how Mesos works
I would also like to see data shared off-heap with a 3rd-party C++
library via JNI; I think the complications would be how to manage this
memory and make sure the 3rd-party libraries also adhere to the
access contracts.
Tim
On Sat, Aug 29, 2015 at 12:17 PM, Paul Weiss
Hi Nik,
Bharath is mostly referring to Spark committers in this thread.
Tim
On Tue, Jun 9, 2015 at 9:51 PM, Niklas Nielsen nik...@mesosphere.io wrote:
Hi Bharath (and rest of Spark dev list!),
Just a small shout-out: I am an Apache Mesos committer and would love to help
out with anything you
So, to confirm - in this mode, when a Spark application/context runs a
series of tasks, each task will launch a full SparkExecutor process?
What is the cpu/mem cost of such Spark Executor process (resource
sizing passed in the Mesos task launch request)?
related to your work on the recently merged Spark
Cluster Mode for Mesos.
Can you elaborate on how it works compared to the Standalone mode?
And do you maintain the dynamic allocation of Mesos resources in
cluster mode, unlike the coarse-grained mode?
On Tue, May 5, 2015 at 9:54 PM, Timothy Chen
Hi Gidon,
1. Yes, each Spark application is wrapped in a new Mesos framework.
2. In fine grained mode, what happens is that Spark scheduler
specifies a custom Mesos executor per slave, and each Mesos task is a
Spark executor that will be launched by the Mesos executor. It's hard
to determine
+1 Tested on a 4-node Mesos cluster in fine-grained and coarse-grained mode.
Tim
On Wed, Apr 8, 2015 at 9:32 AM, Denny Lee denny.g@gmail.com wrote:
The RC2 bits are lacking Hadoop 2.4 and Hadoop 2.6 - was that intended
(they were included in RC1)?
On Wed, Apr 8, 2015 at 9:01 AM Tom Graves
+1 (non-binding)
Tested Mesos coarse/fine-grained mode on a 4-node Mesos cluster with a
simple shuffle/map task.
Will be testing with a more complete suite (i.e. spark-perf) once the
infrastructure is set up to do so.
Tim
On Thu, Feb 19, 2015 at 12:50 PM, Krishna Sankar ksanka...@gmail.com wrote:
Congrats all!
Tim
On Feb 4, 2015, at 7:10 AM, Pritish Nawlakhe
prit...@nirvana-international.com wrote:
Congrats and welcome back!!
Thank you!!
Regards
Pritish
Nirvana International Inc.
Big Data, Hadoop, Oracle EBS and IT Solutions
VA - SWaM, MD - MBE Certified Company
What error are you getting?
Tim
Sent from my iPhone
On Dec 24, 2014, at 8:59 PM, Naveen Madhire vmadh...@umail.iu.edu wrote:
Hi All,
I am starting to use Spark. I am having trouble getting the latest code
from git.
I am using IntelliJ as suggested in the below link,
that you're changing the Mesos scheduler. Is there a JIRA where
this work is taking place?
-kr, Gerard.
On Mon, Dec 22, 2014 at 6:01 PM, Timothy Chen tnac...@gmail.com wrote:
Hi Gerard,
Really nice guide!
I'm particularly interested in the Mesos scheduling side to more evenly
distribute cores
Hi Gerard,
Really nice guide!
I'm particularly interested in the Mesos scheduling side to more evenly
distribute cores across cluster.
I wonder if you are using coarse grain mode or fine grain mode?
I'm making changes to the Spark Mesos scheduler, and I think we can propose a
best way to
Hi Matei,
Definitely in favor of moving into this model for exactly the reasons
you mentioned.
From the module list though, the module that I'm mostly involved with
and is not listed is the Mesos integration piece.
I believe we also need a maintainer for Mesos, and I wonder if there
is someone
Hi Gurvinder,
I tried fine-grained mode before and didn't run into that problem.
On Sun, Oct 5, 2014 at 11:44 PM, Gurvinder Singh
gurvinder.si...@uninett.no wrote:
On 10/06/2014 08:19 AM, Fairiz Azizi wrote:
The Spark online docs indicate that Spark is compatible with Mesos 0.18.1
I've gotten
(Hit enter too soon...)
What is your setup and steps to repro this?
Tim
On Mon, Oct 6, 2014 at 12:30 AM, Timothy Chen tnac...@gmail.com wrote:
Hi Gurvinder,
I tried fine grain mode before and didn't get into that problem.
On Sun, Oct 5, 2014 at 11:44 PM, Gurvinder Singh
gurvinder.si
, let's just quit :-)
- Gurvinder
On 10/06/2014 09:30 AM, Timothy Chen wrote:
(Hit enter too soon...)
What is your setup and steps to repro this?
Tim
On Mon, Oct 6, 2014 at 12:30 AM, Timothy Chen tnac...@gmail.com wrote:
Hi Gurvinder,
I tried fine grain mode before and didn't
+1 Make-distribution works, and also tested simple Spark jobs with Spark
on Mesos on an 8-node Mesos cluster.
Tim
On Thu, Aug 28, 2014 at 8:53 PM, Burak Yavuz bya...@stanford.edu wrote:
+1. Tested MLlib algorithms on Amazon EC2, algorithms show speed-ups between
1.5-5x compared to the 1.0.2
On August 25, 2014 at 5:05:56 AM, Gary Malouf (malouf.g...@gmail.com)
wrote:
We have not tried the work-around because there are other bugs in there
that affected our set-up, though it seems it would help.
On Mon, Aug 25, 2014 at 12:54 AM, Timothy Chen tnac...@gmail.com wrote:
+1 to have
it does, it would be good to explain why it
behaves like that.
Matei
On August 25, 2014 at 2:28:18 PM, Timothy Chen (tnac...@gmail.com) wrote:
Hi Matei,
I'm going to investigate from both the Mesos and Spark sides and will hopefully
have a good long-term solution. In the meantime, having a work-around
+1 to have the work around in.
I'll be investigating from the Mesos side too.
Tim
On Sun, Aug 24, 2014 at 9:52 PM, Matei Zaharia matei.zaha...@gmail.com wrote:
Yeah, Mesos in coarse-grained mode probably wouldn't work here. It's too bad
that this happens in fine-grained mode -- would be