You need to set SPARK_MEM or SPARK_EXECUTOR_MEMORY (for Spark 1.0) to the
amount of memory your application needs to consume on each node. Try
setting those variables (example: export SPARK_MEM=10g) or set it via
SparkConf.set as suggested by jaeholee.
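For the SparkConf route, a minimal sketch might look like the following (the app name is a placeholder; "spark.executor.memory" is the configuration key that SPARK_EXECUTOR_MEMORY corresponds to):

```scala
// Hedged sketch: setting executor memory programmatically via SparkConf
// instead of the SPARK_MEM / SPARK_EXECUTOR_MEMORY environment variables.
import org.apache.spark.SparkConf

val conf = new SparkConf()
  .setAppName("memory-demo")            // placeholder app name
  .set("spark.executor.memory", "10g")  // same effect as export SPARK_EXECUTOR_MEMORY=10g
// Pass the conf to your context as usual: new SparkContext(conf)
```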
On Tue, Apr 22, 2014 at 4:25 PM, jaeholee wrote:
It is possible, Nick. Please take a look here:
https://aws.amazon.com/articles/Elastic-MapReduce/4926593393724923
the source code is here as a pull request:
https://github.com/apache/spark/pull/223
let me know if you have any questions.
On Mon, Apr 21, 2014 at 1:00 PM, Nicholas Chammas wrote:
Sorry, Matei. I will definitely start working on making the changes soon :)
On Mon, Apr 21, 2014 at 1:10 PM, Matei Zaharia matei.zaha...@gmail.com wrote:
There was a patch posted a few weeks ago (
https://github.com/apache/spark/pull/223), but it needs a few changes in
packaging because it uses
I ran into the same issue. The problem seems to be with the jets3t library
that Spark uses in project/SparkBuild.scala.
change this:
"net.java.dev.jets3t" % "jets3t" % "0.7.1"
to this:
"net.java.dev.jets3t" % "jets3t" % "0.9.0"
0.7.1 is not the right version of jets3t for Hadoop
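For context, in an sbt build the dependency would sit inside the project's library dependencies, roughly as below. This is a sketch only, not the actual contents of project/SparkBuild.scala; the surrounding build settings are omitted:

```scala
// Sketch: the jets3t version bump as it would appear in an sbt
// libraryDependencies sequence in project/SparkBuild.scala.
libraryDependencies ++= Seq(
  "net.java.dev.jets3t" % "jets3t" % "0.9.0"  // was "0.7.1"
)
```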
Home directory or $home/conf directory? It works for me with
metrics.properties placed under the conf directory.
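For reference, a minimal conf/metrics.properties along those lines might look like this (a sketch, assuming the CSV sink and JVM source shipped with Spark; the output directory is a placeholder):

```properties
# Hedged sketch of conf/metrics.properties: enable the CSV sink for all
# instances, polling every 10 seconds. The directory is a placeholder.
*.sink.csv.class=org.apache.spark.metrics.sink.CsvSink
*.sink.csv.period=10
*.sink.csv.unit=seconds
*.sink.csv.directory=/tmp/spark-metrics

# Report JVM metrics from the master and worker instances.
master.source.jvm.class=org.apache.spark.metrics.source.JvmSource
worker.source.jvm.class=org.apache.spark.metrics.source.JvmSource
```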
On Tue, Apr 15, 2014 at 6:08 PM, Paul Schooss paulmscho...@gmail.com wrote:
Has anyone got this working? I have enabled the properties for it in the
metrics.conf file and ensured that it is
Spark community,
What's the size of the largest Spark cluster ever deployed? I've heard
Yahoo is running Spark on several hundred nodes, but I don't know the actual
number.
Can someone share?
Thanks