nnection
> java.nio.channels.ClosedChannelException
>
> Error in dmesg:
> [799873.309897] Out of memory: Kill process 50001 (java) score 927 or
> sacrifice child
> [799873.314439] Killed process 50001 (java) total-vm:65652448kB,
> anon-rss:57246528kB, file-r
in sparkr?
I’m using Mesos 1.0.1 and Spark 2.0.1
Thanks.
--
<http://www.orchardplatform.com/>
Rodrick Brown / Site Reliability Engineer
+1 917 445 6839 / rodr...@orchardplatform.com
<mailto:char...@orchardplatform.com>
Orchard Platform
101 5th Avenue, 4th Floor, New York, NY
he stage “collect at NaiveBayes.scala:400”.
>
> At this stage, the first ~375 tasks start very fast, then progress slows
> down. The task count never reaches 500; we get an OOM around task 380-390.
>
>
>
>
-Duser.timezone=UTC
-Xloggc:garbage-collector.log
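The GC log flag quoted above is more useful with detail options enabled. A hedged sketch for a JDK 8 HotSpot JVM (the two extra flags are standard HotSpot options, not flags from the original message; JDK 9+ replaces them with -Xlog:gc*):

```
-Duser.timezone=UTC
-Xloggc:garbage-collector.log
-XX:+PrintGCDetails
-XX:+PrintGCDateStamps
```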
>
> Thanks
>
https://github.com/couchbase/couchbase-spark-connector
s.get('jarfile'), params.get('optargs')))
i.e. SPARK_MESOS_DRIVER_MEM = CHRONOS_RESOURCE_MEM * .7
We basically cap the driver at 70% of the total memory allocated to the
Mesos task. This helps avoid Mesos killing long-running jobs because of OOM.
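The 70% rule above can be sketched as a small Python helper; the function name and the 16 GB example value are illustrative, not taken from the original Chronos wrapper script:

```python
# Cap the Spark driver's heap at a fraction of what Chronos allocates to
# the Mesos task, leaving headroom for off-heap memory so the kernel OOM
# killer is not triggered. Names here are illustrative, not from the
# original script.

def driver_mem_mb(chronos_resource_mem_mb, fraction=0.7):
    """Return a spark.driver.memory value in MB for a given allocation."""
    return int(chronos_resource_mem_mb * fraction)

# A Chronos job allocated 16384 MB gets an 11468 MB driver heap.
spark_mesos_driver_mem = driver_mem_mb(16384)
print(spark_mesos_driver_mem)
```

The fraction can be tuned per job; 70% is just the rule of thumb from the message above.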
> I couldn’t find any information on this subject in the docs – am I missing
> something?
>
> Thanks for any hints,
>
> Peter
>
>> View this message in context:
>> http://apache-spark-user-list.1001560.n3.nabble.com/how-to-use-spark-mesos-constraints-tp25541.html
>> Sent from the Apache Spark User List mailing list archive at Nabble.com.
ingJobType": false,
"errorsSinceLastSuccess": 0,
"uris": [
    "file:///data/orchard/R/sparkr_env.R",
    "file:///data/orchard/R/applepie_loan_detail.R"
],
"environmentVariables": [
    {
        "name": "SPAR
Is this Yarn or Mesos? For the latter you need to start an external shuffle
service.
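For Spark on Mesos, the external shuffle service mentioned above has to run on every agent node that will host executors. A hedged sketch, assuming a standard Spark 1.6+ distribution layout ($SPARK_HOME is a placeholder for the actual install path):

```shell
# Start Spark's Mesos external shuffle service on each agent node
# (the script ships in the sbin/ directory of the Spark distribution):
$SPARK_HOME/sbin/start-mesos-shuffle-service.sh

# Then point executors at it in spark-defaults.conf:
#   spark.shuffle.service.enabled   true
#   spark.dynamicAllocation.enabled true
```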
On Fri, May 20, 2016 at 11:48 AM -0700, "Cui, Weifeng" wrote:
Hi guys,
Our team has a Hadoop 2.6.0 cluster with Spark 1.6.1. We want to set
We have similar jobs consuming from Kafka and writing to Elasticsearch, and
the culprit is usually not enough memory for the executor or driver, or not
enough executors overall to process the job. Try dynamic allocation if you're
not sure how many cores/executors you actually need.
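The dynamic-allocation suggestion above might look like this as spark-submit flags; the min/max bounds and job name are placeholders, and on Mesos this also requires the external shuffle service to be running on each agent:

```shell
# Let Spark size the executor pool itself instead of guessing a fixed
# --num-executors; bounds below are illustrative, tune them per job.
spark-submit \
  --conf spark.dynamicAllocation.enabled=true \
  --conf spark.shuffle.service.enabled=true \
  --conf spark.dynamicAllocation.minExecutors=2 \
  --conf spark.dynamicAllocation.maxExecutors=20 \
  --executor-memory 4g \
  kafka_to_es_job.py   # placeholder for the actual streaming job
```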
Try increasing the memory allocated for this job.
On Sun, Apr 10, 2016 at 9:12 PM -0700, "Bijay Kumar Pathak" wrote:
Hi,
I am running Spark 1.6 on EMR. I have a workflow which does the following
things: read the 2 flat files, create the
le.io.numConnectionsPerPeer 3
spark.shuffle.service.enabled true
spark.files.fetchTimeout 120s
spark.akka.timeout 250s
spark.dynamicAllocation.enabled true
Any other suggestions at this point? I'm not sure what else to do.