Spark on Mesos broken on 2.4?

2019-03-18 Thread Jorge Machado
Hello Everyone, I'm just trying out the spark-shell on Mesos and I don't get any executors. To debug it I started the vagrant box from Aurora and tried it out there, and I see the same issue as on my cluster. On Mesos the only active framework is the spark-shell; it is running 1.6.1
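For anyone reproducing this, a minimal sanity check is to point spark-shell at the Mesos master explicitly and cap the resources requested. The master URL, library path, and sizes below are placeholders, not values from this thread:

```shell
# Assumed paths/hosts -- adjust for your cluster.
export MESOS_NATIVE_JAVA_LIBRARY=/usr/lib/libmesos.so
./bin/spark-shell \
  --master mesos://mesos-master:5050 \
  --conf spark.executor.memory=1g \
  --conf spark.cores.max=2
```

If the framework registers but no executors start, the Mesos master log will show whether the offers are being declined.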

Using spark and mesos container with host_path volume

2018-12-03 Thread Antoine DUBOIS
Hello, I'm trying to mount a local Ceph volume into my Mesos container. My CephFS is mounted on all agents at /ceph. I'm using Spark 2.4 with Hadoop 3.11, and I'm not using Docker to deploy Spark. The only option I could find to mount a volume, though, is the following (which is also a line I added
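For the Docker containerizer there is a documented volume option; whether an equivalent exists without Docker is exactly the open question in this message. A sketch of the Docker-based form, with an illustrative image name (host:container:mode triples):

```shell
# spark-defaults.conf sketch -- applies to the Docker containerizer only
spark.mesos.executor.docker.image    my-spark-image:2.4
spark.mesos.executor.docker.volumes  /ceph:/ceph:ro
```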

Re: Spark on Mesos - Weird behavior

2018-07-23 Thread Thodoris Zois
>>>> Is there any way to also specify the lower bound? It seems we can't control the resource usage of an application, which is a bit annoying. By the way, we are not using dynamic allocation. - Thodoris

Re: Spark on Mesos: Spark issuing hundreds of SUBSCRIBE requests / second and crashing Mesos

2018-07-23 Thread Nimi W
…scheduler client can result in infinite subscribe loop" (https://issues.apache.org/jira/browse/MESOS-8171). It can be fixed by upgrading to a version of libmesos that has the fix. - Susan > On Fri, Jul 13, 2018 at 3:39 PM, Nimi W wrote: > I've come acro

Re: Spark on Mesos: Spark issuing hundreds of SUBSCRIBE requests / second and crashing Mesos

2018-07-23 Thread Susan X. Huynh
PM, Nimi W wrote: > I've come across an issue with Mesos 1.4.1 and Spark 2.2.1. We launch Spark tasks using the MesosClusterDispatcher in cluster mode. On a couple of occasions, we have noticed that when the Spark Driver crashes (due to various causes - human error, network e

Re: Spark on Mesos - Weird behavior

2018-07-23 Thread Susan X. Huynh
Have you checked that Mesos sends offers to the Spark app's Mesos framework with at least 10 cores and 2GB RAM? If Mesos has no available offers with 10 cores, for example, but does have offers with 8 or 9, you can use smaller executors to better fit the available

Spark on Mesos: Spark issuing hundreds of SUBSCRIBE requests / second and crashing Mesos

2018-07-13 Thread Nimi W
I've come across an issue with Mesos 1.4.1 and Spark 2.2.1. We launch Spark tasks using the MesosClusterDispatcher in cluster mode. On a couple of occasions, we have noticed that when the Spark Driver crashes (to various causes - human error, network error), sometimes, when the Driver is restarted

Re: Spark on Mesos - Weird behavior

2018-07-11 Thread Pavel Plotnikov
ocation. >> >> - Thodoris >> >> >> On 10 Jul 2018, at 14:35, Pavel Plotnikov >> wrote: >> >> Hello Thodoris! >> Have you checked this: >> - does mesos cluster have available resources? >> - if spark have waiting tasks in queue more

Re: Spark on Mesos - Weird behavior

2018-07-11 Thread Thodoris Zois
> - does mesos cluster have available resources? > - if spark have waiting tasks in queue more than spark.dynamicAllocation.schedulerBacklogTimeout configuration value? > - And then, have you checked that mesos send offers to spark app mesos framework at least

Re: Spark on Mesos - Weird behavior

2018-07-11 Thread Pavel Plotnikov
ve available resources? > - if spark have waiting tasks in queue more than > spark.dynamicAllocation.schedulerBacklogTimeout configuration value? > - And then, have you checked that mesos send offers to spark app mesos > framework at least with 10 cores and 2GB RAM? > > If mesos ha

Re: [SPARK on MESOS] Avoid re-fetching Spark binary

2018-07-11 Thread Tien Dat
Thanks for your suggestion. I have been checking Spark-jobserver. Just a off-topic question about this project: Does Apache Spark project have any support/connection to this Spark-jobserver project? I noticed that they do not have release for the newest version of Spark (e.g., 2.3.1). As you

Re: [SPARK on MESOS] Avoid re-fetching Spark binary

2018-07-10 Thread Mark Hamstra
> Quoted from: http://apache-spark-user-list.1001560.n3.nabble.com/SPARK-on-MESOS-Avoid-re-fetching-Spark-binary-tp32849p32865.html

Re: Spark on Mesos - Weird behavior

2018-07-10 Thread Thodoris Zois
aiting tasks in queue more than > spark.dynamicAllocation.schedulerBacklogTimeout configuration value? > - And then, have you checked that mesos send offers to spark app mesos > framework at least with 10 cores and 2GB RAM? > > If mesos have not available offers with 10 cores,

Re: Spark on Mesos - Weird behavior

2018-07-10 Thread Pavel Plotnikov
Hello Thodoris! Have you checked this: - does mesos cluster have available resources? - if spark have waiting tasks in queue more than spark.dynamicAllocation.schedulerBacklogTimeout configuration value? - And then, have you checked that mesos send offers to spark app mesos framework at least
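The checklist above maps to a handful of settings; a hedged spark-defaults.conf sketch with illustrative values:

```shell
# spark-defaults.conf sketch (values are assumptions, not from this thread)
spark.dynamicAllocation.enabled                   true
spark.shuffle.service.enabled                     true
# How long tasks may sit queued before more executors are requested:
spark.dynamicAllocation.schedulerBacklogTimeout   1s
```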

Spark on Mesos - Weird behavior

2018-07-09 Thread Thodoris Zois
Hello list, We are running Apache Spark on a Mesos cluster and we see weird executor behavior. When we submit an app with, e.g., 10 cores per executor, 2GB of memory, and max cores 30, we expect to see 3 executors running on the cluster. However, sometimes there are only 2... Spark applications
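The expectation described here follows from integer division of max cores by per-executor cores; a sketch of such a submission, with flag values taken from the example in the message:

```shell
./bin/spark-submit \
  --conf spark.executor.cores=10 \
  --conf spark.executor.memory=2g \
  --conf spark.cores.max=30 \
  my-app.jar   # placeholder application jar
# 30 / 10 = 3 executors expected -- but only if Mesos actually offers
# three agents with >= 10 free cores and 2GB of memory each.
```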

Re: [SPARK on MESOS] Avoid re-fetching Spark binary

2018-07-07 Thread Mark Hamstra
Essentially correct. The latency to start a Spark Job is nowhere close to 2-4 seconds under typical conditions. Creating a new Spark Application every time instead of running multiple Jobs in one Application is not going to lead to acceptable interactive or real-time performance, nor is that an

Re: [SPARK on MESOS] Avoid re-fetching Spark binary

2018-07-06 Thread Mark Hamstra
The latency to start a Spark Job is nowhere close to 2-4 seconds under typical conditions. You appear to be creating a new Spark Application every time instead of running multiple Jobs in one Application. On Fri, Jul 6, 2018 at 3:12 AM Tien Dat wrote: > Dear Timothy, > > It works like a charm

Re: [SPARK on MESOS] Avoid re-fetching Spark binary

2018-07-06 Thread Timothy Chen
I know there are some community efforts shown in Spark summits before, mostly around reusing the same Spark context with multiple “jobs”. I don’t think reducing Spark job startup time is a community priority afaik. Tim On Fri, Jul 6, 2018 at 7:12 PM Tien Dat wrote: > Dear Timothy, > > It works

Re: [SPARK on MESOS] Avoid re-fetching Spark binary

2018-07-06 Thread Tien Dat
Dear Timothy, It works like a charm now. BTW (don't judge me if I am too greedy :-)), the latency to start a Spark job is around 2-4 seconds, unless I am unaware of some awesome optimization in Spark. Do you know if the Spark community is working on reducing this latency? Best -- Sent from:

Re: [SPARK on MESOS] Avoid re-fetching Spark binary

2018-07-06 Thread Timothy Chen
runs, the disk usage increases gradually. > > Therefore, our expectation is to have Spark running on Mesos without this > binary extraction, as well as without storing the same binary every time > new > Spark job runs. > > Does that make sense to you? And do

Re: [SPARK on MESOS] Avoid re-fetching Spark binary

2018-07-06 Thread Tien Dat
is not much since the file is stored locally anyway. The process that takes more time is the extraction. Finally, since Mesos makes a new folder for extracting the Spark binary each time a new Spark job runs, the disk usage increases gradually. Therefore, our expectation is to have Spark running

Re: [SPARK on MESOS] Avoid re-fetching Spark binary

2018-07-06 Thread Timothy Chen
the same remote binary as well. Tim On Fri, Jul 6, 2018 at 5:00 PM Tien Dat wrote: > Dear all, > > We are running Spark with Mesos as the master for resource management. > In our cluster, there are jobs that require very short response time (near > real time applications), which u

[SPARK on MESOS] Avoid re-fetching Spark binary

2018-07-06 Thread Tien Dat
Dear all, We are running Spark with Mesos as the master for resource management. In our cluster, there are jobs that require very short response times (near-real-time applications), usually around 3-5 seconds. In order for Spark to execute with Mesos, one has to specify
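The fetch-and-extract behavior comes from spark.executor.uri; one way to avoid it (discussed later in this thread) is to pre-install Spark on every agent and point spark.mesos.executor.home at it. Paths and hostnames here are assumptions:

```shell
# Instead of letting every run fetch and extract a tarball:
#   spark.executor.uri  hdfs://namenode/dist/spark-2.3.1.tgz
# pre-provision Spark on each agent and reference the install directly:
./bin/spark-submit \
  --master mesos://mesos-master:5050 \
  --conf spark.mesos.executor.home=/opt/spark-2.3.1 \
  --class MyApp app.jar   # placeholder class and jar
```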

[Spark streaming-Mesos-cluster mode] java.lang.RuntimeException: Stream jar not found

2017-07-26 Thread RCinna
. How to fix this issue (if possible ?) * Versions : Mesos 1.2.0 spark 2.0.1 hdfs 2.7 More information, see stackoverflow issue here <https://stackoverflow.com/questions/44703631/spark-streaming-cluster-mode-in-mesos-java-lang-runtimeexception-stream-jar-n> . Thanks, RCinna -

Spark on Mesos failure, when launching a simple job

2017-05-22 Thread ved_kpl
I have been trying to learn spark on mesos, but the spark-shell just keeps on ignoring the offers. Here is my setup: All the components are in the same subnet - 1 mesos master on EC2 instance (t2.micro) command: `mesos-master --work_dir=/tmp/abc --hostname=` - 2 mesos agents (each with 4

Re: Spark declines mesos offers

2017-04-26 Thread Pavel Plotnikov
a/org/apache/spark/scheduler/cluster/mesos/MesosCoarseGrainedSchedulerBackend.scala#L316 > > On Mon, Apr 24, 2017 at 4:53 AM, Pavel Plotnikov < > pavel.plotni...@team.wrike.com> wrote: > >> Hi, everyone! I run spark 2.1.0 jobs on the top of Mesos cluster in >>

Re: Spark declines mesos offers

2017-04-24 Thread Michael Gummelt
Have you run with debug logging? There are some hints in the debug logs: https://github.com/apache/spark/blob/branch-2.1/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosCoarseGrainedSchedulerBackend.scala#L316 On Mon, Apr 24, 2017 at 4:53 AM, Pavel Plotnikov < pavel.plo
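A log4j snippet that surfaces those decline messages (logger name based on the scheduler backend package linked above):

```shell
# conf/log4j.properties -- enable DEBUG for the Mesos scheduler classes
log4j.logger.org.apache.spark.scheduler.cluster.mesos=DEBUG
```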

Spark declines mesos offers

2017-04-24 Thread Pavel Plotnikov
Hi, everyone! I run spark 2.1.0 jobs on the top of Mesos cluster in coarse-grained mode with dynamic resource allocation. And sometimes spark mesos scheduler declines mesos offers despite the fact that not all available resources were used (I have less workers than the possible maximum

Re: Spark on Mesos with Docker in bridge networking mode

2017-02-17 Thread Michael Gummelt
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:357) > at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:357) > at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111) > at java.lan

Spark on Mesos with Docker in bridge networking mode

2017-02-16 Thread cherryii
…java.lang.Thread.run(Thread.java:745) I was trying to follow the instructions here: https://github.com/apache/spark/pull/15120 So in my Marathon json I'm defining the ports to use for the spark driver, spark ui and block manager. Can anyone help me get this running in bridge networking mode? -- View

Re: Dynamic resource allocation to Spark on Mesos

2017-02-09 Thread Michael Gummelt
;mgumm...@mesosphere.io> wrote: > > Sun, are you using marathon to run the shuffle service? > > On Tue, Feb 7, 2017 at 7:36 PM, Sun Rui <sunrise_...@163.com> wrote: > >> Yi Jan, >> >> We have been using Spark on Mesos with dynamic allocation enabled, whi

Re: Dynamic resource allocation to Spark on Mesos

2017-02-08 Thread Sun Rui
marathon to run the shuffle service? > > On Tue, Feb 7, 2017 at 7:36 PM, Sun Rui <sunrise_...@163.com > <mailto:sunrise_...@163.com>> wrote: > Yi Jan, > > We have been using Spark on Mesos with dynamic allocation enabled, which > works and improves the overall c

Re: Dynamic resource allocation to Spark on Mesos

2017-02-08 Thread Michael Gummelt
Sun, are you using marathon to run the shuffle service? On Tue, Feb 7, 2017 at 7:36 PM, Sun Rui <sunrise_...@163.com> wrote: > Yi Jan, > > We have been using Spark on Mesos with dynamic allocation enabled, which > works and improves the overall cluster utilization. > >

Re: Dynamic resource allocation to Spark on Mesos

2017-02-07 Thread Sun Rui
Yi Jan, We have been using Spark on Mesos with dynamic allocation enabled, which works and improves the overall cluster utilization. In terms of job, do you mean jobs inside a Spark application or jobs among different applications? Maybe you can read http://spark.apache.org/docs/latest/job
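Dynamic allocation on Mesos needs the external shuffle service running on every agent; Spark ships a launcher script for it. A sketch of the setup:

```shell
# On each Mesos agent (e.g. kept alive via Marathon):
./sbin/start-mesos-shuffle-service.sh

# And in spark-defaults.conf for the application:
# spark.shuffle.service.enabled   true
# spark.dynamicAllocation.enabled true
```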

Re: Dynamic resource allocation to Spark on Mesos

2017-02-02 Thread ji yan
le up to spark.cores.max or however >> many cpu cores available on the cluster, and this may be undesirable >> because the number of executors in rdd.parallelize(collection, # of >> partitions) is being overriden >> >> On Thu, Feb 2, 2017 at 1:30 PM, Michael Gummelt <

Re: Dynamic resource allocation to Spark on Mesos

2017-02-02 Thread Michael Gummelt
s) is being overriden > > On Thu, Feb 2, 2017 at 1:30 PM, Michael Gummelt <mgumm...@mesosphere.io> > wrote: > >> As of Spark 2.0, Mesos mode does support setting cores on the executor >> level, but you might need to set the property directly (--conf >> spark.

Re: Dynamic resource allocation to Spark on Mesos

2017-02-02 Thread Ji Yan
) is being overriden On Thu, Feb 2, 2017 at 1:30 PM, Michael Gummelt <mgumm...@mesosphere.io> wrote: > As of Spark 2.0, Mesos mode does support setting cores on the executor > level, but you might need to set the property directly (--conf > spark.executor.cores=). I've written

Re: Dynamic resource allocation to Spark on Mesos

2017-02-02 Thread Michael Gummelt
As of Spark 2.0, Mesos mode does support setting cores on the executor level, but you might need to set the property directly (--conf spark.executor.cores=). I've written about this here: https://docs.mesosphere.com/1.8/usage/service-guides/spark/job-scheduling/. That doc is for DC/OS
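Per the message above, the per-executor core count can be set directly on Mesos; an illustrative submission with arbitrary values:

```shell
./bin/spark-submit \
  --master mesos://mesos-master:5050 \
  --total-executor-cores 8 \
  --conf spark.executor.cores=2 \
  --class MyApp app.jar   # placeholder class and jar
# up to 8 / 2 = 4 executors, offers permitting
```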

Re: Dynamic resource allocation to Spark on Mesos

2017-02-02 Thread Ji Yan
-cores) >>> >>> On Thu, Feb 2, 2017 at 12:41 PM, Ji Yan <ji...@drive.ai> wrote: >>> >>>> I have done a experiment on this today. It shows that only CPUs are >>>> tolerant of insufficient cluster size when a job starts. On my cluster, I >>

Re: Dynamic resource allocation to Spark on Mesos

2017-02-02 Thread Michael Gummelt
total-executor-cores, and --executor-cores) >> >> On Thu, Feb 2, 2017 at 12:41 PM, Ji Yan <ji...@drive.ai> wrote: >> >>> I have done a experiment on this today. It shows that only CPUs are >>> tolerant of insufficient cluster size when a job starts. On my

Re: Dynamic resource allocation to Spark on Mesos

2017-02-02 Thread Ji Yan
have 180Gb of memory and 64 cores, when I run spark-submit ( on mesos ) >> with --cpu_cores set to 1000, the job starts up with 64 cores. but when I >> set --memory to 200Gb, the job fails to start with "Initial job has not >> accepted any resources; check your cluster UI to

Re: Dynamic resource allocation to Spark on Mesos

2017-02-02 Thread Michael Gummelt
gt; tolerant of insufficient cluster size when a job starts. On my cluster, I > have 180Gb of memory and 64 cores, when I run spark-submit ( on mesos ) > with --cpu_cores set to 1000, the job starts up with 64 cores. but when I > set --memory to 200Gb, the job fails to start with "Initial

Re: Dynamic resource allocation to Spark on Mesos

2017-02-02 Thread Ji Yan
I have done an experiment on this today. It shows that only CPUs are tolerant of insufficient cluster size when a job starts. On my cluster, I have 180Gb of memory and 64 cores; when I run spark-submit (on mesos) with --cpu_cores set to 1000, the job starts up with 64 cores. But when I set

Re: Dynamic resource allocation to Spark on Mesos

2017-01-30 Thread Michael Gummelt
On Mon, Jan 30, 2017 at 9:47 AM, Ji Yan <ji...@drive.ai> wrote: > Tasks begin scheduling as soon as the first executor comes up > > > Thanks all for the clarification. Is this the default behavior of Spark on > Mesos today? I think this is what we are looking for because

Re: Dynamic resource allocation to Spark on Mesos

2017-01-30 Thread Ji Yan
> > Tasks begin scheduling as soon as the first executor comes up Thanks all for the clarification. Is this the default behavior of Spark on Mesos today? I think this is what we are looking for because sometimes a job can take up lots of resources and later jobs could not get all the res

Re: Dynamic resource allocation to Spark on Mesos

2017-01-28 Thread Michael Gummelt
"Launch each executor with at least 1GB RAM, > but if mesos offers 2GB at some moment, then launch an executor with 2GB > RAM". > > I wonder what's benefit of that? To reduce the "resource fragmentation"? > > Anyway, that is not supported at this moment. In all

Re: Dynamic resource allocation to Spark on Mesos

2017-01-28 Thread Shuai Lin
each executor with at least 1GB RAM, but if mesos offers 2GB at some moment, then launch an executor with 2GB RAM". I wonder what's benefit of that? To reduce the "resource fragmentation"? Anyway, that is not supported at this moment. In all the supported cluster managers of s

Re: Dynamic resource allocation to Spark on Mesos

2017-01-27 Thread Mihai Iacob
rather than a single static amount set up front. Dynamic Allocation is supported in Spark on Mesos, but we here at Mesosphere haven't been testing it much, and I'm not sure what the community adoption is.  So I can't yet speak to its robustness, but we will be investing in it soon.  Many users want

Re: Dynamic resource allocation to Spark on Mesos

2017-01-27 Thread Michael Gummelt
nly allocating as many executors as a job needs, rather than a single static amount set up front. Dynamic Allocation is supported in Spark on Mesos, but we here at Mesosphere haven't been testing it much, and I'm not sure what the community adoption is. So I can't yet speak to its robustness, but

Dynamic resource allocation to Spark on Mesos

2017-01-27 Thread Ji Yan
Dear Spark Users, Is there currently a way to dynamically allocate resources to Spark on Mesos? Within Spark we can specify the CPU cores and memory before running a job. The way I understand it, the Spark job will not run if the CPU/Mem requirement is not met. This may lead to a decrease in overall

Re: launch spark on mesos within a docker container

2016-12-30 Thread Timothy Chen
It seems like it's getting offer decline calls, which seems like it's getting the offer calls and was able to reply. Can you turn on TRACE logging in Spark with the Mesos coarse grain scheduler and see if it says if it is processing the offers? Tim On Fri, Dec 30, 2016 at 2:35 PM, Ji Yan <

Re: launch spark on mesos within a docker container

2016-12-30 Thread Ji Yan
hen it is launched. > > Tim > > > On Dec 30, 2016, at 1:23 PM, Ji Yan <ji...@drive.ai> wrote: > > Dear Spark Users, > > We are trying to launch Spark on Mesos from within a docker container. We > have found that since the Spark executors need to talk back at

Re: launch spark on mesos within a docker container

2016-12-30 Thread Timothy Chen
Hi Ji, One way to make it fixed is to set LIBPROCESS_PORT environment variable on the executor when it is launched. Tim > On Dec 30, 2016, at 1:23 PM, Ji Yan <ji...@drive.ai> wrote: > > Dear Spark Users, > > We are trying to launch Spark on Mesos from within a docker
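Building on the LIBPROCESS_PORT suggestion, bridge networking also needs the driver-side Spark ports pinned so they can be mapped. The port numbers and the host address below are placeholders:

```shell
# In the driver container (map the same ports in your Docker/Marathon config):
export LIBPROCESS_PORT=9000
export LIBPROCESS_ADVERTISE_IP=<host-ip>   # hypothetical host address
./bin/spark-submit \
  --conf spark.driver.port=7001 \
  --conf spark.blockManager.port=7002 \
  --conf spark.driver.host=<host-ip> \
  --class MyApp app.jar   # placeholder class and jar
```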

launch spark on mesos within a docker container

2016-12-30 Thread Ji Yan
Dear Spark Users, We are trying to launch Spark on Mesos from within a docker container. We have found that since the Spark executors need to talk back at the Spark driver, there is need to do a lot of port mapping to make that happen. We seemed to have mapped the ports on what we could find from

Re: two spark-shells spark on mesos not working

2016-11-22 Thread Michael Gummelt
those logs here as well. On Tue, Nov 22, 2016 at 4:52 AM, John Yost <hokiege...@gmail.com> wrote: > Hi Everyone, > > There is probably an obvious answer to this, but not sure what it is. :) > > I am attempting to launch 2..n spark shells using Mesos as the master >

two spark-shells spark on mesos not working

2016-11-22 Thread John Yost
Hi Everyone, There is probably an obvious answer to this, but not sure what it is. :) I am attempting to launch 2..n spark shells using Mesos as the master (this is to support 1..n researchers running pyspark stuff on our data). I can launch two or more spark shells without any problem

Re: Two questions about running spark on mesos

2016-11-14 Thread Michael Gummelt
u20...@hotmail.com> wrote: > Hi Guys, > > > Two questions about running spark on mesos. > > 1, Does spark configuration of conf/slaves still work when running spark > on mesos? > > According to my observations, it seemed that conf/slaves still took > effect when running spar

Two questions about running spark on mesos

2016-11-14 Thread Yu Wei
Hi Guys, Two questions about running spark on mesos. 1, Does spark configuration of conf/slaves still work when running spark on mesos? According to my observations, it seemed that conf/slaves still took effect when running spark-shell. However, it doesn't take effect when deploying

Re: spark on mesos memory sizing with offheap

2016-10-13 Thread Michael Gummelt
It doesn't look like we are. Can you file a JIRA? A workaround is to set spark.mesos.executor.overhead to be at least spark.memory.offheap.size. This is how the container is sized: https://github.com/apache/spark/blob/master/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos
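The sizing described above can be sketched numerically. The 10%-or-384 MB overhead default is an assumption based on the usual Mesos backend behavior (the thread quotes it as roughly 400 MB):

```shell
# How the Mesos container is (under-)sized when off-heap memory is enabled
exec_mem_mb=3072      # spark.executor.memory=3g
offheap_mb=1024       # spark.memory.offHeap.size=1g -- NOT counted by default
tenpct=$(( exec_mem_mb / 10 ))
overhead_mb=$(( tenpct > 384 ? tenpct : 384 ))
container_mb=$(( exec_mem_mb + overhead_mb ))            # what Mesos allocates
needed_mb=$(( exec_mem_mb + overhead_mb + offheap_mb ))  # what the JVM can really use
echo "container=${container_mb}MB needed=${needed_mb}MB"
```

With these numbers the container comes out at 3456 MB (about 3.4g, matching the report in this thread) while the executor can actually consume 4480 MB; setting the Mesos memory-overhead property to at least overhead plus off-heap closes the gap.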

spark on mesos memory sizing with offheap

2016-10-13 Thread vincent gromakowski
Hi, I am trying to understand how Mesos allocates memory when offheap is enabled, but it seems that the framework only takes the heap + 400 MB overhead into consideration for resource allocation. Example: spark.executor.memory=3g spark.memory.offheap.size=1g ==> mesos reports 3.4g allocated for

Re: Spark on mesos in docker not getting parameters

2016-08-09 Thread Michael Gummelt
rom SPARK_EXECUTOR_OPTS, then I would see something wrong. On Tue, Aug 9, 2016 at 10:13 AM, Jim Carroll <jimfcarr...@gmail.com> wrote: > I'm running spark 2.0.0 on Mesos using spark.mesos.executor.docker.image > to > point to a docker container that I built with the Spark installati

Spark on mesos in docker not getting parameters

2016-08-09 Thread Jim Carroll
I'm running spark 2.0.0 on Mesos using spark.mesos.executor.docker.image to point to a docker container that I built with the Spark installation. Everything is working except the Spark client process that's started inside the container doesn't get any of my parameters I set in the spark config

spark on mesos cluster - metrics with graphite sink

2016-06-09 Thread Lior Chaga
Hi, I'm launching spark application on mesos cluster. The namespace of the metric includes the framework id for driver metrics, and both framework id and executor id for executor metrics. These ids are obviously assigned by mesos, and they are not permanent - re-registering the application would
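For reference, a Graphite sink is wired up via conf/metrics.properties; the host and port here are placeholders. In later Spark releases, spark.metrics.namespace can replace the per-framework-id prefix with a stable name, which addresses the non-permanent-id concern raised here:

```shell
# conf/metrics.properties sketch (host/port are assumptions)
*.sink.graphite.class=org.apache.spark.metrics.sink.GraphiteSink
*.sink.graphite.host=graphite.example.com
*.sink.graphite.port=2003
*.sink.graphite.prefix=spark
```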

Questions about Spark On Mesos

2016-03-15 Thread Shuai Lin
…But there is one important thing to consider before making the decision: data locality. If we run Spark on Mesos, can it achieve good data locality when processing HDFS data? I think Spark on YARN can achieve that out of the box, but I'm not sure whether Spark on Mesos can. I've searched through the archive

Re: Spark on Mesos with Centos 6.6 NFS

2015-12-01 Thread Akhil Das
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) > at java.lang.Thread.run(Thread.java:745) > The remoteMachineHost that throws the error has write access to the specific folder. Any thoughts?

--jars option not working for spark on Mesos in cluster mode

2015-10-21 Thread Virag Kothari
Hi, I am trying to run a spark job on mesos in cluster mode using the following command ./bin/spark-submit --deploy-mode cluster --master mesos://172.17.0.1:7077 —-jars http://172.17.0.2:18630/mesos/extraJars.jar --class MyClass http://172.17.0.2:18630/mesos/foo.jar The application jar
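Note that the dash in "—-jars" above is a non-ASCII em-dash as archived, which spark-submit would not parse as the --jars flag; mail clients often introduce this. The intended command, with plain ASCII hyphens throughout:

```shell
./bin/spark-submit \
  --deploy-mode cluster \
  --master mesos://172.17.0.1:7077 \
  --jars http://172.17.0.2:18630/mesos/extraJars.jar \
  --class MyClass \
  http://172.17.0.2:18630/mesos/foo.jar
```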

Re: Spark on Mesos / Executor Memory

2015-10-17 Thread Bharath Ravi Kumar
esos bounced earlier. My apologies. >> >> On Thu, Oct 15, 2015 at 12:19 PM, Bharath Ravi Kumar <reachb...@gmail.com >> > wrote: >> >>> (Reviving this thread since I ran into similar issues...) >>> >>> I'm running two spark jobs (in mesos fine grai

Re: Spark on Mesos / Executor Memory

2015-10-17 Thread Bharath Ravi Kumar
s, >> Bharath >> >> On Thu, Oct 15, 2015 at 12:29 PM, Bharath Ravi Kumar <reachb...@gmail.com >> > wrote: >> >>> Resending since user@mesos bounced earlier. My apologies. >>> >>> On Thu, Oct 15, 2015 at 12:19 PM, Bharath Ravi Kumar < >>&

Re: Spark on Mesos / Executor Memory

2015-10-16 Thread Bharath Ravi Kumar
gt; > On Thu, Oct 15, 2015 at 12:19 PM, Bharath Ravi Kumar <reachb...@gmail.com> > wrote: > >> (Reviving this thread since I ran into similar issues...) >> >> I'm running two spark jobs (in mesos fine grained mode), each belonging >> to a different mesos

Re: Spark on Mesos / Executor Memory

2015-10-15 Thread Bharath Ravi Kumar
Resending since user@mesos bounced earlier. My apologies. On Thu, Oct 15, 2015 at 12:19 PM, Bharath Ravi Kumar <reachb...@gmail.com> wrote: > (Reviving this thread since I ran into similar issues...) > > I'm running two spark jobs (in mesos fine grained mode), each belonging t

Re: Spark on Mesos / Executor Memory

2015-10-15 Thread Bharath Ravi Kumar
(Reviving this thread since I ran into similar issues...) I'm running two spark jobs (in mesos fine grained mode), each belonging to a different mesos role, say low and high. The low:high mesos weights are 1:10. On expected lines, I see that the low priority job occupies cluster resources

Re: spark on mesos gets killed by cgroups for too much memory

2015-09-23 Thread Dick Davies
e thing I don't get is why is it trying to take all 3GB at startup? > That seems excessive. So if I want to run a job that only needs 512MB, I > need to have 3GB free at all times? Doesn't make sense. > > We are using sparks native mesos support. On spark submit we use:

Spark standalone/Mesos on top of Ceph

2015-09-22 Thread fightf...@163.com
Hi guys, Here is the info for Ceph: http://ceph.com/ We are investigating and using Ceph for distributed storage and monitoring, and are specifically interested in using Ceph as the underlying file system storage for Spark. However, we have no experience achieving that. Has anybody seen such

Re: Spark standalone/Mesos on top of Ceph

2015-09-22 Thread Jerry Lam
Do you have specific reasons to use Ceph? I used Ceph before; I'm not too in love with it, especially when I was using the Ceph Object Gateway S3 API. There are some incompatibilities with the AWS S3 API. You really need to try it before making the commitment. Did you manage to install it? On

Re: Re: Spark standalone/Mesos on top of Ceph

2015-09-22 Thread Jerry Lam
Best, > Sun. > -- > fightf...@163.com > From: Jerry Lam <chiling...@gmail.com> > Date: 2015-09-23 09:37 > To: fightf...@163.com > CC: user <user@spark.apache.org> > Subject: Re: Spark standalone/Mesos on top of Ceph

Re: Re: Spark standalone/Mesos on top of Ceph

2015-09-22 Thread fightf...@163.com
Lam Date: 2015-09-23 09:37 To: fightf...@163.com CC: user Subject: Re: Spark standalone/Mesos on top of Ceph Do you have specific reasons to use Ceph? I used Ceph before, I'm not too in love with it especially when I was using the Ceph Object Gateway S3 API. There are some incompatibilities

spark on mesos gets killed by cgroups for too much memory

2015-09-22 Thread oggie
I'm using spark 1.2.2 on mesos 0.21. I have a java job that is submitted to mesos from marathon. I also have cgroups configured for mesos on each node. Even though the job, when running, uses 512MB, it tries to take over 3GB at startup and is killed by cgroups. When I start mesos-slave, it's

Python Packages in Spark w/Mesos

2015-09-21 Thread John Omernik
to include in the distribution, right?. I thought of using the Docker Mesos integration, but I have been unable to find information on this (see my other question on Docker/Mesos/Spark). Any other thoughts on the best way to include packages in Spark WITHOUT installing on each node would be appreciated

Re: Python Packages in Spark w/Mesos

2015-09-21 Thread Tim Chen
ation on this (see my other question on Docker/Mesos/Spark). > Any other thoughts on the best way to include packages in Spark WITHOUT > installing on each node would be appreciated! > > John >

Re: Spark on Mesos with Jobs in Cluster Mode Documentation

2015-09-21 Thread Alan Braithwaite
esn't rule the world yet). >>> >>> So I can see this from both perspectives now and passing in the >>> properties file will probably work just fine for me, but for my better >>> understanding: When the executor starts, will it read any of the >>> environm

Re: Spark on Mesos with Jobs in Cluster Mode Documentation

2015-09-19 Thread Timothy Chen
of just mesos/docker (since I'm fully aware that docker >>> doesn't rule the world yet). >>> >>> So I can see this from both perspectives now and passing in the properties >>> file will probably work just fine for me, but for my better understanding: >>&

Re: Spark on Mesos with Jobs in Cluster Mode Documentation

2015-09-19 Thread Tim Chen
perties given to it by the > dispatcher and nothing more? > > Lemme know if anything needs more clarification and thanks for your mesos > contribution to spark! > > - Alan > > On Thu, Sep 17, 2015 at 5:03 PM, Timothy Chen <t...@mesosphere.io> wrote: > >

Re: Spark on Mesos with Jobs in Cluster Mode Documentation

2015-09-17 Thread Alan Braithwaite
>>> Adding this info to the docs would be great. Is the appropriate action >>> to create an issue regarding improvement of the docs? For those of us who >>> are gaining the experience having such a pointer is very helpful. >>> >>> Tom >>>

Re: Spark on Mesos with Jobs in Cluster Mode Documentation

2015-09-17 Thread Alan Braithwaite
;> >> Tom >> >> From: Tim Chen <t...@mesosphere.io> >> Date: Thursday, September 10, 2015 at 10:25 AM >> To: Tom Waterhouse <tomwa...@cisco.com> >> Cc: "user@spark.apache.org" <user@spark.apache.org> >> Subject: Re: Spark on Mes

Re: Spark on Mesos with Jobs in Cluster Mode Documentation

2015-09-17 Thread Alan Braithwaite
you for the explanation. You are correct, my Mesos experience is >>>> very light, and I haven’t deployed anything via Marathon yet. What you >>>> have stated here makes sense, I will look into doing this. >>>> >>>> Adding this info to the docs wo

Re: Spark on Mesos with Jobs in Cluster Mode Documentation

2015-09-17 Thread Timothy Chen
ou >>> have stated here makes sense, I will look into doing this. >>> >>> Adding this info to the docs would be great. Is the appropriate action to >>> create an issue regarding improvement of the docs? For those of us who are >>> gaining the e

Re: Spark on Mesos with Jobs in Cluster Mode Documentation

2015-09-17 Thread Alan Braithwaite
the properties given to it by the dispatcher and nothing more? Lemme know if anything needs more clarification and thanks for your mesos contribution to spark! - Alan On Thu, Sep 17, 2015 at 5:03 PM, Timothy Chen <t...@mesosphere.io> wrote: > Hi Alan, > > If I understand correctly,

Re: Spark on Mesos with Jobs in Cluster Mode Documentation

2015-09-11 Thread Tim Chen
> Cc: "user@spark.apache.org" <user@spark.apache.org> > Subject: Re: Spark on Mesos with Jobs in Cluster Mode Documentation > > Hi Tom, > > Sorry the documentation isn't really rich, since it's probably assuming > users understands how Mesos and framework

Re: Spark on Mesos with Jobs in Cluster Mode Documentation

2015-09-11 Thread Tom Waterhouse (tomwater)
Tom Waterhouse <tomwa...@cisco.com> Cc: "user@spark.apache.org" <user@spark.apache.org> Subject: Re: Spark on Mesos with Jobs in Cluster Mode Documentation Hi Tom, Sorry the documentation isn't really rich,

Re: Can not allocate executor when running spark on mesos

2015-09-10 Thread Iulian Dragoș
>>> fine-grained(default)). Have you gone through this documentation already? >>> http://spark.apache.org/docs/latest/running-on-mesos.html#using-a-mesos-master-url >>> >>> Thanks >>> Best Regards >>> >>> On Tue, Sep 8, 2015 at 12:54 PM, canan chen

Re: Spark on Mesos with Jobs in Cluster Mode Documentation

2015-09-10 Thread Tim Chen
services with Marathon, and you can use Marathon to launch the Spark dispatcher. Then all clients instead of specifying the Mesos master URL (e.g: mesos://mesos.master:2181), then just talks to the dispatcher only (mesos://spark-dispatcher.mesos:7077), and the dispatcher will then start and watch
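The setup described above can be sketched as two steps; hostnames follow the placeholders used in the message:

```shell
# 1) Run the dispatcher somewhere long-lived (e.g. under Marathon):
./sbin/start-mesos-dispatcher.sh --master mesos://zk://mesos.master:2181/mesos

# 2) Clients submit against the dispatcher, not the Mesos master:
./bin/spark-submit \
  --deploy-mode cluster \
  --master mesos://spark-dispatcher.mesos:7077 \
  --class MyApp http://repo.example.com/app.jar   # placeholder app location
```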

Spark on Mesos with Jobs in Cluster Mode Documentation

2015-09-10 Thread Tom Waterhouse (tomwater)
…configure the system. As running, there is one instance of the Spark Mesos dispatcher running outside of Mesos, so it is not part of the sphere of Mesos resource management. I used the following Stack Overflow posts as guidelines: http://stackoverflow.com/questions/31164725/spark-mesos-dispatche

Re: Can not allocate executor when running spark on mesos

2015-09-09 Thread canan chen
coarse-grained or >> fine-grained(default)). Have you gone through this documentation already? >> http://spark.apache.org/docs/latest/running-on-mesos.html#using-a-mesos-master-url >> >> Thanks >> Best Regards >> >> On Tue, Sep 8, 2015 at 12:54 PM, can

Can not allocate executor when running spark on mesos

2015-09-08 Thread canan chen
Hi all, I try to run Spark on Mesos, but it looks like I cannot allocate resources from Mesos. I am not an expert on Mesos, but from the Mesos log, it seems Spark always declines the offer from Mesos. Not sure what's wrong; maybe some configuration change is needed. Here's the mesos master log: I0908 15

Re: Can not allocate executor when running spark on mesos

2015-09-08 Thread Akhil Das
ccn...@gmail.com> wrote: > Hi all, > > I try to run spark on mesos, but it looks like I can not allocate > resources from mesos. I am not expert of mesos, but from the mesos log, it > seems spark always decline the offer from mesos. Not sure what's wrong, > maybe need some conf

Re: Can not allocate executor when running spark on mesos

2015-09-08 Thread canan chen
tion already? > http://spark.apache.org/docs/latest/running-on-mesos.html#using-a-mesos-master-url > > Thanks > Best Regards > > On Tue, Sep 8, 2015 at 12:54 PM, canan chen <ccn...@gmail.com> wrote: > >> Hi all, >> >> I try to run spark on mesos, but it lo

spark on mesos with docker from private repository

2015-08-05 Thread Eyal Fink
is that, as far as I understand, I need this file in the root directory of the executor dir, and I can't find a way to make the Spark executor pull this file (not without changing Spark code). Am I missing something? It seems that Spark does support Mesos+Docker, so I wonder what other people with this setup

Re: Running multiple batch jobs in parallel using Spark on Mesos

2015-08-04 Thread Akhil Das
. Thanks Best Regards On Mon, Aug 3, 2015 at 2:25 PM, Akash Mishra akash.mishr...@gmail.com wrote: Hello *, We are trying to build some Batch jobs using Spark on Mesos. Mesos offer's two main mode of deployment of Spark job. 1. Fine-grained 2. Coarse-grained When we are running the spark

Running multiple batch jobs in parallel using Spark on Mesos

2015-08-03 Thread Akash Mishra
Hello *, We are trying to build some batch jobs using Spark on Mesos. Mesos offers two main modes of deploying a Spark job: 1. Fine-grained 2. Coarse-grained. When we run the Spark jobs in fine-grained mode, Spark uses the max amount of offers from Mesos and runs the job

Re: Spark on Mesos - Shut down failed while running spark-shell

2015-07-28 Thread Tim Chen
...@gmail.com wrote: Hi all, I am running Spark 1.4.1 on mesos 0.23.0. While I am able to start spark-shell on the node with mesos-master running and it works fine, when I try to start spark-shell on mesos-slave nodes I encounter this error. I greatly appreciate any help. 15/07/27 22:14
