Re: Disable queuing of spark job on Mesos cluster if sufficient resources are not found

2017-05-30 Thread Michael Gummelt
mev...@sky.optymyze.com> wrote: > Is there any configurable timeout which controls queuing of the driver in > Mesos cluster mode or the driver will remain in queue for indefinite until > it find resource on cluster? > > > > *From:* Michael Gummelt [mailto:mgumm...@mesosphere.io] > *Sen

Re: Disable queuing of spark job on Mesos cluster if sufficient resources are not found

2017-05-26 Thread Michael Gummelt
are queuing up on Mesos dispatcher UI. > > Is it possible to tweak some configuration so that my job submission fails > gracefully(instead of queuing up) if sufficient resources are not found on > Mesos cluster? > > Regards, > > Vatsal > -- Michael Gummelt Software Engineer Mesosphere

Re: One question / kerberos, yarn-cluster -> connection to hbase

2017-05-24 Thread Michael Gummelt
; rdd.foreachPartition { partition => > > val connection = Utils.getHbaseConnection(propsObj)._1 > > val table = … > > partition.foreach { json => > > > > } > > table.put(puts) > > table.close() > > connection.close() > > } > > } > > } > > > > > > Keytab file is not getting copied to yarn staging/temp directory, we are > not getting that in SparkFiles.get… and if we pass keytab with --files, > spark-submit is failing because it’s there in --keytab already. > > > > Thanks, > > Sudhir > -- Michael Gummelt Software Engineer Mesosphere

Re: Not able to pass 3rd party jars to mesos executors

2017-05-18 Thread Michael Gummelt
> 1001560.n3.nabble.com/Not-able-pass-3rd-party-jars-to-mesos-executors-tp26918p28689.html > Sent from the Apache Spark User List mailing list archive at Nabble.com. > To unsubscribe e-mail: user-unsubscr...@spark.apache.org > -- Michael Gummelt Software Engineer Mesosphere

Re: Spark declines mesos offers

2017-04-24 Thread Michael Gummelt
less workers than the possible > maximum) and the maximum threshold in the spark configuration is not > reached and the queue have lot of pending tasks. > > May be I have wrong spark or mesos configuration? Does anyone have the > same problems? > -- Michael Gummelt Software Engineer Mesosphere

Re: Spark on Mesos with Docker in bridge networking mode

2017-02-17 Thread Michael Gummelt
g.Thread.run(Thread.java:745) > > I was trying to follow instructions here: > https://github.com/apache/spark/pull/15120 > So in my Marathon json I'm defining the ports to use for the spark driver, > spark ui and block manager. > > Can anyone help me get this running in br

Re: Dynamic resource allocation to Spark on Mesos

2017-02-09 Thread Michael Gummelt
Sun Rui <sunrise_...@163.com> wrote: > Michael, > No. We directly launch the external shuffle service by specifying a larger > heap size than default on each worker node. It is observed that the > processes are quite stable. > > On Feb 9, 2017, at 05:21, Michael Gummelt <

Re: Dynamic resource allocation to Spark on Mesos

2017-02-08 Thread Michael Gummelt
In terms of job, do you mean jobs inside a Spark application or jobs among > different applications? Maybe you can read http://spark.apache.org/ > docs/latest/job-scheduling.html for help. > > On Jan 31, 2017, at 03:34, Michael Gummelt <mgumm...@mesosphere.io> wrote: > >

Re: Launching a Spark application in a subset of machines

2017-02-07 Thread Michael Gummelt
Spark installed on >> them. >> - I want to launch one Spark application through spark submit. However I >> want this application to run on only a subset of these machines, >> disregarding data locality. (e.g. 10 machines) >> >> Is this possible?. Is there any op

Re: Dynamic resource allocation to Spark on Mesos

2017-02-02 Thread Michael Gummelt
s) is being overriden > > On Thu, Feb 2, 2017 at 1:30 PM, Michael Gummelt <mgumm...@mesosphere.io> > wrote: > >> As of Spark 2.0, Mesos mode does support setting cores on the executor >> level, but you might need to set the property directly (--conf >> spark.

Re: Dynamic resource allocation to Spark on Mesos

2017-02-02 Thread Michael Gummelt
, but the configuration is the same. On Thu, Feb 2, 2017 at 1:06 PM, Ji Yan <ji...@drive.ai> wrote: > I was mainly confused why this is the case with memory, but with cpu > cores, it is not specified on per executor level > > On Thu, Feb 2, 2017 at 1:02 PM, Michael Gummelt <mgumm...@m

Re: Dynamic resource allocation to Spark on Mesos

2017-02-02 Thread Michael Gummelt
-executor-memory for memory, > and --total-executor-cores for cpu cores > > On Thu, Feb 2, 2017 at 12:56 PM, Michael Gummelt <mgumm...@mesosphere.io> > wrote: > >> What CLI args are your referring to? I'm aware of spark-submit's >> arguments (--executor-memory, --
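For readers landing on this thread: in Mesos coarse-grained mode, cores are capped cluster-wide with `--total-executor-cores`, while per-executor cores go through a configuration property. A minimal sketch of the flags discussed above (the master URL, resource sizes, and jar name are placeholders, not values from the thread):

```shell
# Sketch of the submission flags discussed in this thread (Spark >= 2.0 on Mesos).
# Master URL and resource values are examples only.
spark-submit \
  --master mesos://zk://master.mesos:2181/mesos \
  --executor-memory 4G \
  --total-executor-cores 16 \
  --conf spark.executor.cores=4 \
  my-app.jar
```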

Re: Dynamic resource allocation to Spark on Mesos

2017-02-02 Thread Michael Gummelt
requirement. > > On Mon, Jan 30, 2017 at 11:34 AM, Michael Gummelt <mgumm...@mesosphere.io> > wrote: > >> >> >> On Mon, Jan 30, 2017 at 9:47 AM, Ji Yan <ji...@drive.ai> wrote: >> >>> Tasks begin scheduling as soon as the first executor comes up

Re: Dynamic resource allocation to Spark on Mesos

2017-01-30 Thread Michael Gummelt
overall resource utilization on the cluster if when another job starts up > that has a hard requirement on resources, the extra resources to the first > job can be flexibly re-allocated to the second job. > > On Sat, Jan 28, 2017 at 2:32 PM, Michael Gummelt <mgumm...@mesosphere.io&g

Re: Dynamic resource allocation to Spark on Mesos

2017-01-28 Thread Michael Gummelt
is able to give. Is this possible with the current >> implementation? >> >> Thanks >> Ji >> >> The information in this email is confidential and may be legally >> privileged. It is intended solely for the addressee. Access to this email >> by anyone else is unautho

Re: Dynamic resource allocation to Spark on Mesos

2017-01-27 Thread Michael Gummelt
nt, any > disclosure, copying, distribution or any action taken or omitted to be > taken in reliance on it, is prohibited and may be unlawful. > -- Michael Gummelt Software Engineer Mesosphere

Re: spark locality

2017-01-12 Thread Michael Gummelt
ot sure each worker will connect to c* nodes on the same mesos > agent ? > > 2017-01-12 21:13 GMT+01:00 Michael Gummelt <mgumm...@mesosphere.io>: > >> The code in there w/ docs that reference CNI doesn't actually run when >> CNI is in effect, and doesn't have anything

Re: spark locality

2017-01-12 Thread Michael Gummelt
t; I have found this but I am not sure how it can help... > https://github.com/mesosphere/spark-build/blob/ > a9efef8850976f787956660262f3b77cd636f3f5/conf/spark-env.sh > > > 2017-01-12 20:16 GMT+01:00 Michael Gummelt <mgumm...@mesosphere.io>: > >> That's a good

Re: spark locality

2017-01-12 Thread Michael Gummelt
or Cassandra ? > > V > -- Michael Gummelt Software Engineer Mesosphere

Re: Could not parse Master URL for Mesos on Spark 2.1.0

2017-01-10 Thread Michael Gummelt
> So just for the record, setting the env variable MESOS_NATIVE_JAVA_LIBRARY="// libmesos-1.0.0.so" fixed the whole thing. > Thanks for the help ! > @michael if you want to talk about the setup we're using, we can talk > about it directly
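The fix described in this thread is to point the JVM at the native Mesos library before submitting. A sketch (the library path is an assumption and varies by how Mesos was installed):

```shell
# Point Spark at the native libmesos before launching.
# The path below is an example; use the location of libmesos on your system.
export MESOS_NATIVE_JAVA_LIBRARY=/usr/local/lib/libmesos.so
spark-submit --master mesos://master.mesos:5050 my-app.jar
```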

Re: Could not parse Master URL for Mesos on Spark 2.1.0

2017-01-10 Thread Michael Gummelt
aged in the > final dist of my app… > So everything should work in theory. > > > > On Tue, Jan 10, 2017 7:22 PM, Michael Gummelt mgumm...@mesosphere.io > wrote: > >> Just build with -Pmesos http://spark.apache.org/docs/ >> latest/building-spark.html#building-with

Re: Could not parse Master URL for Mesos on Spark 2.1.0

2017-01-10 Thread Michael Gummelt
>> -- >> *Abhishek J Bhandari* >> Mobile No. +1 510 493 6205 (USA) >> Mobile No. +91 96387 93021 (IND) >> *R & D Department* >> *Valent Software Inc. CA* >> Email: *abhis...@valent-software.com* > *Olivier Girardot* | Associé > o.girar...@lateral-thoughts.com > +33 6 24 09 17 94 > -- Michael Gummelt Software Engineer Mesosphere

Re: Spark/Mesos with GPU support

2016-12-30 Thread Michael Gummelt
ny action taken or omitted to be > taken in reliance on it, is prohibited and may be unlawful. > -- Michael Gummelt Software Engineer Mesosphere

Re: Mesos Spark Fine Grained Execution - CPU count

2016-12-26 Thread Michael Gummelt
ems that CPU usage is > just a "label" for an executor on Mesos. Where's this in the code? > > Pozdrawiam, > Jacek Laskowski > > https://medium.com/@jaceklaskowski/ > Mastering Apache Spark 2.0 https://bit.ly/mastering-apache-spark > Follow me at https://

Re: Mesos Spark Fine Grained Execution - CPU count

2016-12-26 Thread Michael Gummelt
on? > >> > >> Tim > >> > >> On Mon, Dec 19, 2016 at 2:45 PM, Mehdi Meziane > >> <mehdi.mezi...@ldmobile.net> wrote: > >> > We will be interested by the results if you give a try to Dynamic > >> allocation > >> >

Re: Mesos Spark Fine Grained Execution - CPU count

2016-12-19 Thread Michael Gummelt
cleanup its resources. > > > Regards > Sumit Chawla > > > On Mon, Dec 19, 2016 at 12:45 PM, Michael Gummelt <mgumm...@mesosphere.io> > wrote: > >> > I should preassume that No of executors should be less than number of >> tasks. >> >> No.

Re: Mesos Spark Fine Grained Execution - CPU count

2016-12-19 Thread Michael Gummelt
>> > number starts decreasing. How ever, the number of CPUs does not >>> decrease >>> > propotionally. When the job was about to finish, there was a single >>> > remaininig task, however CPU count was still 20. >>> > >>> > My questions, is why there is no one to one mapping between tasks and >>> cpus >>> > in Fine grained? How can these CPUs be released when the job is done, >>> so >>> > that other jobs can start. >>> > >>> > Regards >>> > Sumit Chawla >>> >> >> > -- Michael Gummelt Software Engineer Mesosphere

Re: Mesos Spark Fine Grained Execution - CPU count

2016-12-19 Thread Michael Gummelt
d and request them again later when there is demand. This feature is > particularly useful if multiple applications share resources in your Spark > cluster. > > - Mail Original - > De: "Sumit Chawla" <sumitkcha...@gmail.com> > À: "Michael Gu

Re: Mesos Spark Fine Grained Execution - CPU count

2016-12-19 Thread Michael Gummelt
in Fine grained? How can these CPUs be released when the job is done, so > that other jobs can start. > > > Regards > Sumit Chawla > > -- Michael Gummelt Software Engineer Mesosphere

Re: driver in queued state and not started

2016-12-06 Thread Michael Gummelt
t started. > > > Which should I check? > > > > Thanks, > > Jared, (韦煜) > Software developer > Interested in open source software, big data, Linux > -- Michael Gummelt Software Engineer Mesosphere

Re: two spark-shells spark on mesos not working

2016-11-22 Thread Michael Gummelt
hells are listed as separate, uniquely-named > Mesos frameworks and that there are plenty of CPU core and memory resources > on our cluster. > > I am using Spark 2.0.1 on Mesos 0.28.1. Any ideas that y'all may have > would be very much appreciated. > > Thanks! :) > > --John > > -- Michael Gummelt Software Engineer Mesosphere

Re: Two questions about running spark on mesos

2016-11-14 Thread Michael Gummelt
mesos in cluster mode. Then submitted a long > running job succeeded. > > Then I want to kill the job. > How could I do that? Is there any similar commands as launching spark > on yarn? > > > Thanks, > > Jared, (韦煜) > Software developer > Interested in open source software, big data, Linux > -- Michael Gummelt Software Engineer Mesosphere
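A driver submitted through the Mesos cluster dispatcher can be killed with the same spark-submit binary using the submission ID printed at submission time (and shown in the dispatcher UI). A sketch with placeholder dispatcher host and submission ID:

```shell
# Kill a driver running in Mesos cluster mode.
# The dispatcher host and submission ID below are examples.
spark-submit --master mesos://dispatcher.mesos:7077 --kill driver-20161114-0001
```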

Re: sandboxing spark executors

2016-11-04 Thread Michael Gummelt

Re: Submit job with driver options in Mesos Cluster mode

2016-10-31 Thread Michael Gummelt
e > any products or services to any persons who are prohibited from receiving > such information under applicable law. The contents of this communication > may not be accurate or complete and are subject to change without notice. > As such, Orchard App, Inc. (including its subsidiaries and affiliates, > "Orchard") makes no representation regarding the accuracy or completeness > of the information contained herein. The intended recipient is advised to > consult its own professional advisors, including those specializing in > legal, tax and accounting matters. Orchard does not provide legal, tax or > accounting advice. > -- Michael Gummelt Software Engineer Mesosphere

Re: How to make Mesos Cluster Dispatcher of Spark 1.6.1 load my config files?

2016-10-19 Thread Michael Gummelt
luster \ >> --supervise \ >> --executor-memory 5G \ >> --driver-memory 2G \ >> --total-executor-cores 4 \ >> --jars /build/analytics/kafkajobs/spark-streaming-kafka_2.10-1.6.2.jar \ >> /build/analytics/kafkajobs/kafkajobs-prod.jar >> >> It threw me an error: *Exception in threa
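For the config-file question in this thread: spark-submit can load a spark-defaults-style properties file explicitly with `--properties-file`; in cluster mode the file must be readable at that path on the machine where the driver actually runs. A sketch (the properties-file path is hypothetical):

```shell
# Load Spark properties from an explicit file instead of conf/spark-defaults.conf.
# In Mesos cluster mode, the path must exist on the node that runs the driver.
spark-submit \
  --master mesos://dispatcher.mesos:7077 \
  --deploy-mode cluster \
  --properties-file /build/analytics/kafkajobs/spark.properties \
  /build/analytics/kafkajobs/kafkajobs-prod.jar
```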

Re: No way to set mesos cluster driver memory overhead?

2016-10-13 Thread Michael Gummelt

Re: spark on mesos memory sizing with offheap

2016-10-13 Thread Michael Gummelt
into consideration for resources allocation. > Example: spark.executor.memory=3g spark.memory.offheap.size=1g ==> mesos > report 3.4g allocated for the executor > Is there any configuration to use both heap and offheap for mesos > allocation ? > -- Michael Gummelt Software Engineer Mesosphere

Re: Sending extraJavaOptions for Spark 1.6.1 on mesos 0.28.2 in cluster mode

2016-09-20 Thread Michael Gummelt
ny idea how to achieve this in mesos. > -Regards > Sagar > -- Michael Gummelt Software Engineer Mesosphere
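For the extraJavaOptions question in this thread: JVM options are passed through configuration properties rather than positional arguments, for both driver and executors. A sketch (the option values are examples, not values from the thread):

```shell
# Pass JVM options to the driver and executors on Mesos (values are examples).
spark-submit \
  --master mesos://dispatcher.mesos:7077 \
  --deploy-mode cluster \
  --conf "spark.driver.extraJavaOptions=-Dconfig.file=app.conf" \
  --conf "spark.executor.extraJavaOptions=-XX:+UseG1GC" \
  my-app.jar
```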

Re: very high maxresults setting (no collect())

2016-09-19 Thread Michael Gummelt
).sort($"id") > output.coalesce(1000).write.format("com.databricks.spark.csv").save("/tmp/...") > Cheers for any help/pointers! There are a couple of memory leak tickets > fixed in v1.6.2 that may affect the driver so I may try an upgrade (the > executors are fine). > Adrian > -- Michael Gummelt Software Engineer Mesosphere

Re: No SparkR on Mesos?

2016-09-07 Thread Michael Gummelt
os cluster*. > > Error in sparkR.sparkContext(master, appName, sparkHome, sparkConfigMap, : > > JVM is not ready after 10 seconds > > > > > > I couldn’t find any information on this subject in the docs – am I missing > something? > > > > Thanks for any hints, > > Peter > -- Michael Gummelt Software Engineer Mesosphere

Re: Mesos coarse-grained problem with spark.shuffle.service.enabled

2016-09-07 Thread Michael Gummelt
to launch it before start the > application, if the given Spark will be downloaded to the Mesos executor > after executor launch but it's looking for the started external shuffle > service in advance? > > Is it possible I can't use spark.executor.uri and spark.dynamicAllocation.
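The interplay discussed in this thread: dynamic allocation on Mesos requires an external shuffle service that is already running on every agent before any executor launches (Spark ships a Mesos-specific one started via `sbin/start-mesos-shuffle-service.sh`). A sketch of the relevant properties (master URL and jar are placeholders):

```shell
# Dynamic allocation on Mesos. The MesosExternalShuffleService must already be
# running on each agent; executors register with it at startup.
spark-submit \
  --master mesos://master.mesos:5050 \
  --conf spark.shuffle.service.enabled=true \
  --conf spark.dynamicAllocation.enabled=true \
  my-app.jar
```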

Re: Please assist: Building Docker image containing spark 2.0

2016-08-26 Thread Michael Gummelt
R] After correcting the problems, you can resume the build with the >> command >> [ERROR] mvn -rf :spark-mllib_2.11 >> The command '/bin/sh -c ./build/mvn -Pyarn -Phadoop-2.4 >> -Dhadoop.version=2.4.0 -DskipTests clean package' returned a non-zero code: >> 1 >

Re: Please assist: Building Docker image containing spark 2.0

2016-08-26 Thread Michael Gummelt
:) On Thu, Aug 25, 2016 at 2:29 PM, Marco Mistroni <mmistr...@gmail.com> wrote: > No i wont accept that :) > I can't believe i have wasted 3 hrs for a space! > > Many thanks MIchael! > > kr > > On Thu, Aug 25, 2016 at 10:01 PM, Michael Gummelt <mgumm...@meso

Re: zookeeper mesos logging in spark

2016-08-26 Thread Michael Gummelt

Re: What do I loose if I run spark without using HDFS or Zookeeper?

2016-08-25 Thread Michael Gummelt
ke its work in progress. At very least Mesos took the >> initiative to provide alternatives to ZK. I am just really looking forward >> for this. >> >> https://issues.apache.org/jira/browse/MESOS-3797 >> >> >> >> On Thu, Aug 25, 2016 2:00 PM, Michael Gum

Re: Please assist: Building Docker image containing spark 2.0

2016-08-25 Thread Michael Gummelt
ing Apache spark 2.0" > RUN git clone git://github.com/apache/spark.git > WORKDIR /spark > RUN ./build/mvn -Pyarn -Phadoop-2.4 -Dhadoop.version=2.4.0 -DskipTests > clean package > > > Could anyone assist pls? > > kindest regarsd > Marco > > -- Michael Gummelt Software Engineer Mesosphere

Re: What do I loose if I run spark without using HDFS or Zookeeper?

2016-08-25 Thread Michael Gummelt
It >> comes as part of Hadoop core (HDFS, Map-reduce and Yarn). >> >> I have not gone and installed Yarn without installing Hadoop. >> >> What is the overriding reason to have the Spark on its own? >> >> You can use Spark in Local or Standalone mode if you do not want Hadoop >> core. >> >> HTH >> >> Dr Mich Talebzadeh >> >> On 24 August 2016 at 21:54, kant kodali <kanth...@gmail.com> wrote: >> >> What do I loose if I run spark without using HDFS or Zookeper ? which of >> them is almost a must in practice? >> -- Michael Gummelt Software Engineer Mesosphere

Re: 2.0.1/2.1.x release dates

2016-08-19 Thread Michael Gummelt

Re: Attempting to accept an unknown offer

2016-08-19 Thread Michael Gummelt
d-8c0d-35bd91c1ad0a-O162910496
W0816 23:17:01.985651 16360 sched.cpp:1195] Attempting to accept an unknown offer b859f2f3-7484-482d-8c0d-35bd91c1ad0a-O162910497
W0816 23:17:01.985801 16360 sched.cpp:1195] Attempting to accept an unknown offer b859f2f3-7484-482d-8c0d-35bd91c1ad0a-O162910498
W0816 23:17:01.985961 16360 sched.cpp:1195] Attempting to accept an unknown offer b859f2f3-7484-482d-8c0d-35bd91c1ad0a-O162910499
W0816 23:17:01.986121 16360 sched.cpp:1195] Attempting to accept an unknown offer b859f2f3-7484-482d-8c0d-35bd91c1ad0a-O162910500
2016-08-16 23:18:41,877:16226(0x7f71271b6700):ZOO_WARN@zookeeper_interest@1557: Exceeded deadline by 13ms
2016-08-16 23:21:12,007:16226(0x7f71271b6700):ZOO_WARN@zookeeper_interest@1557: Exceeded deadline by 11ms
-- Michael Gummelt Software Engineer Mesosphere

Re: mesos or kubernetes ?

2016-08-13 Thread Michael Gummelt
> > > - > > To unsubscribe e-mail: user-unsubscr...@spark.apache.org > > > > - > To unsubscribe e-mail: user-unsubscr...@spark.apache.org > > -- Michael Gummelt Software Engineer Mesosphere

Re: mesos or kubernetes ?

2016-08-13 Thread Michael Gummelt

Re: Spark on mesos in docker not getting parameters

2016-08-09 Thread Michael Gummelt
installed in the docker > container). > > Can someone tell me what I'm missing? > > Thanks > Jim > > -- > View this message in context: http://apache-spark-user-list.1001560.n3.nabble.com/Spark-on-mesos-in-docker-not-getting-parameters-tp27500.html > Sent from the Apache Spark User List mailing list archive at Nabble.com. > -- Michael Gummelt Software Engineer Mesosphere

Re: Spark Job Doesn't End on Mesos

2016-08-09 Thread Michael Gummelt
cpp:831] Stopping framework > '20160808-170425-2365980426-5050-4372-0034' > > However, the process doesn’t quit after all. This is critical, because I’d > like to use SparkLauncher to submit such jobs. If my job doesn’t end, jobs > will pile up and fill up the memory. Pls help. :-| &

Re: standalone mode only supports FIFO scheduler across applications ? still in spark 2.0 time ?

2016-08-03 Thread Michael Gummelt

Re: Executors assigned to STS and number of workers in Stand Alone Mode

2016-08-03 Thread Michael Gummelt
backends -- > security, data locality, queues, etc. (or I might be simply biased > after having spent months with Spark on YARN mostly?). > > Jacek > -- Michael Gummelt Software Engineer Mesosphere

Re: how to use spark.mesos.constraints

2016-08-03 Thread Michael Gummelt
itute > an offer to sell or a solicitation of an indication of interest to purchase > any loan, security or any other financial product or instrument, nor is it > an offer to sell or a solicitation of an indication of interest to purchase > any products or services to any persons who are p
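For the thread title's question, `spark.mesos.constraints` filters which agent offers Spark will accept, matching `attribute:value` pairs against attributes configured on the Mesos agents, with `;` separating multiple constraints. A sketch (the attribute names and values are examples that must match attributes actually set on your agents):

```shell
# Accept offers only from agents whose Mesos attributes match.
# "rack" and "gpu" below are example agent attributes, not defaults.
spark-submit \
  --master mesos://master.mesos:5050 \
  --conf "spark.mesos.constraints=rack:us-east-1a;gpu:true" \
  my-app.jar
```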