mev...@sky.optymyze.com>
wrote:
> Is there any configurable timeout which controls queuing of the driver in
> Mesos cluster mode, or will the driver remain in the queue indefinitely
> until it finds resources on the cluster?
>
>
>
> *From:* Michael Gummelt [mailto:mgumm...@mesosphere.io]
> *Sen
are queuing up on Mesos dispatcher UI.
>
> Is it possible to tweak some configuration so that my job submission fails
> gracefully (instead of queuing up) if sufficient resources are not found on
> the Mesos cluster?
>
> Regards,
>
> Vatsal
>
--
Michael Gummelt
Software Engineer
Mesosphere
rdd.foreachPartition { partition =>
>
> val connection = Utils.getHbaseConnection(propsObj)._1
>
> val table = …
>
> partition.foreach { json =>
>
>
>
> }
>
> table.put(puts)
>
> table.close()
>
> connection.close()
>
> }
>
> }
>
> }
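For what it's worth, a hedged, self-contained version of that write loop. `Utils.getHbaseConnection`, `propsObj`, and a `jsonToPut` converter are assumptions carried over from the snippet, not a verified API; the elided loop body presumably builds the buffer of puts. The main fixes sketched here are batching the puts per partition and closing the table/connection in a `finally` block:

```scala
import scala.collection.JavaConverters._
import scala.collection.mutable.ArrayBuffer
import org.apache.hadoop.hbase.TableName
import org.apache.hadoop.hbase.client.{Connection, Put, Table}

// Sketch only. Utils.getHbaseConnection, propsObj, and jsonToPut are
// placeholders for project code that the thread does not show.
rdd.foreachPartition { partition =>
  val connection: Connection = Utils.getHbaseConnection(propsObj)._1
  val table: Table = connection.getTable(TableName.valueOf("my_table"))
  try {
    val puts = new ArrayBuffer[Put]()
    partition.foreach { json =>
      puts += jsonToPut(json)        // build one Put per record
    }
    table.put(puts.asJava)           // one batched write per partition
  } finally {
    table.close()                    // release resources even on failure
    connection.close()
  }
}
```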
>
>
>
>
>
> The keytab file is not getting copied to the yarn staging/temp directory; we are
> not getting it in SparkFiles.get… and if we pass the keytab with --files,
> spark-submit fails because it's already there in --keytab.
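For context, a hedged sketch of the usual Kerberos submission (principal, paths, class, and jar are placeholders). Spark distributes the --keytab file itself, which is why listing the same file again under --files is rejected:

```shell
# Placeholders throughout; --keytab alone is responsible for shipping
# the keytab, so do not repeat the same file under --files.
spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --principal user@EXAMPLE.COM \
  --keytab /path/to/user.keytab \
  --class com.example.Main \
  app.jar
```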
>
>
>
> Thanks,
>
> Sudhir
>
> 1001560.n3.nabble.com/Not-able-pass-3rd-party-jars-to-
> mesos-executors-tp26918p28689.html
> Sent from the Apache Spark User List mailing list archive at Nabble.com.
>
> -----
> To unsubscribe e-mail: user-unsubscr...@spark.apache.org
>
>
fewer workers than the possible
> maximum), the maximum threshold in the Spark configuration is not
> reached, and the queue has a lot of pending tasks.
>
> Maybe I have a wrong Spark or Mesos configuration? Does anyone have the
> same problems?
>
g.Thread.run(Thread.java:745)
>
> I was trying to follow instructions here:
> https://github.com/apache/spark/pull/15120
> So in my Marathon json I'm defining the ports to use for the spark driver,
> spark ui and block manager.
>
> Can anyone help me get this running in br
Sun Rui <sunrise_...@163.com> wrote:
> Michael,
> No. We directly launch the external shuffle service by specifying a larger
> heap size than default on each worker node. It is observed that the
> processes are quite stable.
>
> On Feb 9, 2017, at 05:21, Michael Gummelt <
In terms of job, do you mean jobs inside a Spark application or jobs among
> different applications? Maybe you can read http://spark.apache.org/
> docs/latest/job-scheduling.html for help.
>
> On Jan 31, 2017, at 03:34, Michael Gummelt <mgumm...@mesosphere.io> wrote:
>
>
Spark installed on
>> them.
>> - I want to launch one Spark application through spark submit. However I
>> want this application to run on only a subset of these machines,
>> disregarding data locality. (e.g. 10 machines)
>>
>> Is this possible? Is there any op
s) is being overridden
>
> On Thu, Feb 2, 2017 at 1:30 PM, Michael Gummelt <mgumm...@mesosphere.io>
> wrote:
>
>> As of Spark 2.0, Mesos mode does support setting cores on the executor
>> level, but you might need to set the property directly (--conf
>> spark.
, but the configuration is the same.
On Thu, Feb 2, 2017 at 1:06 PM, Ji Yan <ji...@drive.ai> wrote:
> I was mainly confused about why this is the case with memory, but with cpu
> cores, it is not specified at the per-executor level
>
> On Thu, Feb 2, 2017 at 1:02 PM, Michael Gummelt <mgumm...@m
-executor-memory for memory,
> and --total-executor-cores for cpu cores
>
> On Thu, Feb 2, 2017 at 12:56 PM, Michael Gummelt <mgumm...@mesosphere.io>
> wrote:
>
>> What CLI args are you referring to? I'm aware of spark-submit's
>> arguments (--executor-memory, --
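Putting the flags from this exchange together, a sketch of a Mesos coarse-grained submission (all values, the class, and the jar are placeholders):

```shell
# --executor-memory is per executor; --total-executor-cores
# (spark.cores.max) caps cores across the whole cluster; and per the
# note above, cores per executor go through spark.executor.cores
# (Spark 2.0+ on Mesos) rather than a dedicated flag.
spark-submit \
  --master mesos://mesos-master:5050 \
  --executor-memory 4g \
  --total-executor-cores 8 \
  --conf spark.executor.cores=2 \
  --class com.example.Main \
  app.jar
```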
requirement.
>
> On Mon, Jan 30, 2017 at 11:34 AM, Michael Gummelt <mgumm...@mesosphere.io>
> wrote:
>
>>
>>
>> On Mon, Jan 30, 2017 at 9:47 AM, Ji Yan <ji...@drive.ai> wrote:
>>
>>> Tasks begin scheduling as soon as the first executor comes up
overall resource utilization on the cluster if, when another job starts up
> that has a hard requirement on resources, the extra resources given to the first
> job can be flexibly re-allocated to the second job.
>
> On Sat, Jan 28, 2017 at 2:32 PM, Michael Gummelt <mgumm...@mesosphere.io>
is able to give. Is this possible with the current
>> implementation?
>>
>> Thanks
>> Ji
>>
>> The information in this email is confidential and may be legally
>> privileged. It is intended solely for the addressee. Access to this email
>> by anyone else is unautho
nt, any
> disclosure, copying, distribution or any action taken or omitted to be
> taken in reliance on it, is prohibited and may be unlawful.
>
ot sure each worker will connect to c* nodes on the same mesos
> agent ?
>
> 2017-01-12 21:13 GMT+01:00 Michael Gummelt <mgumm...@mesosphere.io>:
>
>> The code in there w/ docs that reference CNI doesn't actually run when
>> CNI is in effect, and doesn't have anything
> I have found this but I am not sure how it can help...
> https://github.com/mesosphere/spark-build/blob/
> a9efef8850976f787956660262f3b77cd636f3f5/conf/spark-env.sh
>
>
> 2017-01-12 20:16 GMT+01:00 Michael Gummelt <mgumm...@mesosphere.io>:
>
>> That's a good
or Cassandra ?
>
> V
>
>
> So just for the record, setting the env variable
> MESOS_NATIVE_JAVA_LIBRARY="//
> libmesos-1.0.0.so" fixed the whole thing.
>
> Thanks for the help !
>
> @michael if you want to talk about the setup we're using, we can talk
> about it directly
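For anyone hitting the same error, a sketch of the fix (the library path below is a placeholder; the exact location and version depend on how Mesos was installed):

```shell
# Point Spark at the Mesos native library, typically from conf/spark-env.sh.
# The path is a placeholder; locate yours with: find / -name "libmesos*.so"
export MESOS_NATIVE_JAVA_LIBRARY=/usr/local/lib/libmesos.so
```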
aged in the
> final dist of my app…
> So everything should work in theory.
>
>
>
> On Tue, Jan 10, 2017 7:22 PM, Michael Gummelt mgumm...@mesosphere.io
> wrote:
>
>> Just build with -Pmesos http://spark.apache.org/docs/
>> latest/building-spark.html#building-with
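The build step being referenced, as a sketch (flags per the building-spark page linked above; -DskipTests is optional and just shortens the build):

```shell
# From a Spark source checkout: build a Spark with Mesos support.
./build/mvn -Pmesos -DskipTests clean package
```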
>
>>
>>
>>
>>
>> --
>> *Abhishek J Bhandari*
>> Mobile No. +1 510 493 6205 (USA)
>> Mobile No. +91 96387 93021 (IND)
>> *R & D Department*
>> *Valent Software Inc. CA*
>> Email: *abhis...@valent-software.com*
>>
>
>
> *Olivier Girardot* | Associé
> o.girar...@lateral-thoughts.com
> +33 6 24 09 17 94
>
ems that CPU usage is
> just a "label" for an executor on Mesos. Where's this in the code?
>
> Pozdrawiam,
> Jacek Laskowski
>
> https://medium.com/@jaceklaskowski/
> Mastering Apache Spark 2.0 https://bit.ly/mastering-apache-spark
> Follow me at https://
on?
> >>
> >> Tim
> >>
> >> On Mon, Dec 19, 2016 at 2:45 PM, Mehdi Meziane
> >> <mehdi.mezi...@ldmobile.net> wrote:
> >> > We will be interested by the results if you give a try to Dynamic
> >> allocation
> >> >
cleanup its resources.
>
>
> Regards
> Sumit Chawla
>
>
> On Mon, Dec 19, 2016 at 12:45 PM, Michael Gummelt <mgumm...@mesosphere.io>
> wrote:
>
>> > I should presume that the number of executors should be less than the number
>> of tasks.
>>
>> No.
>>> > number starts decreasing. However, the number of CPUs does not
>>> decrease
>>> > proportionally. When the job was about to finish, there was a single
>>> > remaining task, however the CPU count was still 20.
>>> >
>>> > My question is why there is no one-to-one mapping between tasks and
>>> cpus
>>> > in fine-grained mode? How can these CPUs be released when the job is done,
>>> so
>>> > that other jobs can start?
>>> >
>>> >
>>> > Regards
>>> > Sumit Chawla
>>>
>>
>>
>
d and request them again later when there is demand. This feature is
> particularly useful if multiple applications share resources in your Spark
> cluster.
>
> - Original Mail -
> De: "Sumit Chawla" <sumitkcha...@gmail.com>
> À: "Michael Gu
in Fine grained? How can these CPUs be released when the job is done, so
> that other jobs can start.
>
>
> Regards
> Sumit Chawla
>
>
t started.
>
>
> Which should I check?
>
>
>
> Thanks,
>
> Jared, (韦煜)
> Software developer
> Interested in open source software, big data, Linux
>
hells are listed as separate, uniquely-named
> Mesos frameworks and that there are plenty of CPU core and memory resources
> on our cluster.
>
> I am using Spark 2.0.1 on Mesos 0.28.1. Any ideas that y'all may have
> would be very much appreciated.
>
> Thanks! :)
>
> --John
>
>
mesos in cluster mode. Then I submitted a long
> running job, which succeeded.
>
> Then I want to kill the job.
> How could I do that? Is there a command similar to the one for launching Spark
> on YARN?
>
>
> Thanks,
>
> Jared, (韦煜)
> Software developer
> Interested in open source software, big data, Linux
>
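A hedged sketch of the dispatcher-side kill, assuming the job went through the Mesos cluster dispatcher (host, port, and submission id are placeholders; the submission id appears in the dispatcher UI and in the spark-submit output):

```shell
# The submission id (driver-...) comes from the dispatcher UI
# or from the output of the original spark-submit.
spark-submit \
  --master mesos://dispatcher-host:7077 \
  --kill driver-20170315123456-0001
```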
e
> any products or services to any persons who are prohibited from receiving
> such information under applicable law. The contents of this communication
> may not be accurate or complete and are subject to change without notice.
> As such, Orchard App, Inc. (including its subsidiaries and affiliates,
> "Orchard") makes no representation regarding the accuracy or completeness
> of the information contained herein. The intended recipient is advised to
> consult its own professional advisors, including those specializing in
> legal, tax and accounting matters. Orchard does not provide legal, tax or
> accounting advice.
>
luster \
>> --supervise \
>> --executor-memory 5G \
>> --driver-memory 2G \
>> --total-executor-cores 4 \
>> --jars /build/analytics/kafkajobs/spark-streaming-kafka_2.10-1.6.2.jar \
>> /build/analytics/kafkajobs/kafkajobs-prod.jar
>>
>> It threw me an error: *Exception in threa
into consideration for resource allocation.
> Example: spark.executor.memory=3g spark.memory.offHeap.size=1g ==> Mesos
> reports 3.4g allocated for the executor.
> Is there any configuration to use both heap and offheap for the Mesos
> allocation?
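One hedged explanation that fits the numbers: on Mesos the advertised executor memory is spark.executor.memory plus spark.mesos.executor.memoryOverhead, which defaults to max(384MB, 10% of executor memory); 3g + 384m is roughly the reported 3.4g, so the off-heap size may simply not be added automatically. A sketch that accounts for it explicitly (values illustrative; raising the overhead to cover off-heap is an assumption, not documented behavior):

```properties
# Sketch: make the Mesos request cover heap + offheap explicitly.
# Default memoryOverhead would be max(384m, 10% of 3g) = 384m (in MB);
# 1408 = 1024 (offheap) + 384 (default headroom).
spark.executor.memory                3g
spark.memory.offHeap.enabled         true
spark.memory.offHeap.size            1g
spark.mesos.executor.memoryOverhead  1408
```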
>
ny idea how to achieve this in Mesos.
>
> -Regards
> Sagar
>
).sort($"id")
> output.coalesce(1000).write.format("com.databricks.spark.csv").save("/tmp/...")
>
> Cheers for any help/pointers! There are a couple of memory leak tickets
> fixed in v1.6.2 that may affect the driver so I may try an upgrade (the
> executors are fine).
>
> Adrian
>
os cluster.
>
> Error in sparkR.sparkContext(master, appName, sparkHome, sparkConfigMap, :
>
> JVM is not ready after 10 seconds
>
>
>
>
>
> I couldn’t find any information on this subject in the docs – am I missing
> something?
>
>
>
> Thanks for any hints,
>
> Peter
>
to launch it before starting the
> application, if the given Spark distribution is downloaded to the Mesos executor
> after the executor launches, but it looks for a running external shuffle
> service in advance?
>
> Is it possible I can't use spark.executor.uri and spark.dynamicAllocation.
&
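Related sketch: with dynamic allocation on Mesos, the external shuffle service is expected to be running on each agent before executors come up; Spark's distribution ships a launcher script for it (often kept alive via Marathon, as discussed elsewhere in this archive):

```shell
# Run on every Mesos agent, from the Spark distribution installed there.
./sbin/start-mesos-shuffle-service.sh
```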
R] After correcting the problems, you can resume the build with the
>> command
>> [ERROR] mvn -rf :spark-mllib_2.11
>> The command '/bin/sh -c ./build/mvn -Pyarn -Phadoop-2.4
>> -Dhadoop.version=2.4.0 -DskipTests clean package' returned a non-zero code:
>> 1
>
:)
On Thu, Aug 25, 2016 at 2:29 PM, Marco Mistroni <mmistr...@gmail.com> wrote:
> No I won't accept that :)
> I can't believe I have wasted 3 hrs for a space!
>
> Many thanks Michael!
>
> kr
>
> On Thu, Aug 25, 2016 at 10:01 PM, Michael Gummelt <mgumm...@meso
ke it's work in progress. At the very least Mesos took the
>> initiative to provide alternatives to ZK. I am just really looking forward
>> to this.
>>
>> https://issues.apache.org/jira/browse/MESOS-3797
>>
>>
>>
>> On Thu, Aug 25, 2016 2:00 PM, Michael Gum
ing Apache spark 2.0"
> RUN git clone git://github.com/apache/spark.git
> WORKDIR /spark
> RUN ./build/mvn -Pyarn -Phadoop-2.4 -Dhadoop.version=2.4.0 -DskipTests
> clean package
>
>
> Could anyone assist pls?
>
> kindest regarsd
> Marco
>
>
It
>> comes as part of Hadoop core (HDFS, Map-reduce and Yarn).
>>
>> I have not gone and installed Yarn without installing Hadoop.
>>
>> What is the overriding reason to have the Spark on its own?
>>
>> You can use Spark in Local or Standalone mode if you do not want Hadoop
>> core.
>>
>> HTH
>>
>> Dr Mich Talebzadeh
>>
>>
>>
>> LinkedIn *
>> https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
>>
>>
>>
>> http://talebzadehmich.wordpress.com
>>
>>
>> *Disclaimer:* Use it at your own risk. Any and all responsibility for
>> any loss, damage or destruction of data or any other property which may
>> arise from relying on this email's technical content is explicitly
>> disclaimed. The author will in no case be liable for any monetary damages
>> arising from such loss, damage or destruction.
>>
>>
>>
>> On 24 August 2016 at 21:54, kant kodali <kanth...@gmail.com> wrote:
>>
>> What do I lose if I run Spark without using HDFS or ZooKeeper? Which of
>> them is almost a must in practice?
>>
d-8c0d-35bd91c1ad0a-O162910496
>>>>>
>>>>> W0816 23:17:01.985651 16360 sched.cpp:1195] Attempting to accept an
>>>>> unknown offer b859f2f3-7484-482d-8c0d-35bd91c1ad0a-O162910497
>>>>>
>>>>> W0816 23:17:01.985801 16360 sched.cpp:1195] Attempting to accept an
>>>>> unknown offer b859f2f3-7484-482d-8c0d-35bd91c1ad0a-O162910498
>>>>>
>>>>> W0816 23:17:01.985961 16360 sched.cpp:1195] Attempting to accept an
>>>>> unknown offer b859f2f3-7484-482d-8c0d-35bd91c1ad0a-O162910499
>>>>>
>>>>> W0816 23:17:01.986121 16360 sched.cpp:1195] Attempting to accept an
>>>>> unknown offer b859f2f3-7484-482d-8c0d-35bd91c1ad0a-O162910500
>>>>>
>>>>> 2016-08-16 23:18:41,877:16226(0x7f71271b6700):ZOO_WARN@zookeeper_interest@1557: Exceeded deadline by 13ms
>>>>>
>>>>> 2016-08-16 23:21:12,007:16226(0x7f71271b6700):ZOO_WARN@zookeeper_interest@1557: Exceeded deadline by 11ms
>>>>>
>>>>>
>>>>>
>>>>>
>>>>
>>>
>>
>
installed in the docker
> container).
>
> Can someone tell me what I'm missing?
>
> Thanks
> Jim
>
>
>
>
> --
> View this message in context: http://apache-spark-user-list.
> 1001560.n3.nabble.com/Spark-on-mesos-in-docker-not-
> getting-parameters-tp27500.html
cpp:831] Stopping framework
> '20160808-170425-2365980426-5050-4372-0034'
>
> However, the process doesn’t quit after all. This is critical, because I’d
> like to use SparkLauncher to submit such jobs. If my job doesn’t end, jobs
> will pile up and fill up the memory. Pls help. :-|
&
backends --
> security, data locality, queues, etc. (or I might be simply biased
> after having spent months with Spark on YARN mostly?).
>
> Jacek
>