Hi All,
This is a bit late, but I found it helpful. Piggy-backing on Wang Hao's
comment, Spark will ignore the spark.executor.memory setting if you add
it to SparkConf via:
conf.set("spark.executor.memory", "1g")
What you actually should do depends on how you run Spark. I found some
official
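For example, assuming you launch with spark-submit (an assumption about the
setup; the master URL, class name, and jar path are taken from later in this
thread), a sketch would be:
./bin/spark-submit --master spark://127.0.0.1:7077 --executor-memory 1g \
  --class SimpleApp target/scala-2.10/simple-project_2.10-1.0.jar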
Hi, Laurent
You could set spark.executor.memory and the heap size with the following
methods:
1. in your conf/spark-env.sh:
export SPARK_WORKER_MEMORY=38g
export SPARK_JAVA_OPTS="-XX:-UseGCOverheadLimit -XX:+UseConcMarkSweepGC -Xmx2g -XX:MaxPermSize=256m"
2. you could also add modification for
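(For reference, with newer Spark versions another common place for such
settings, assuming a standard layout, is conf/spark-defaults.conf; the values
below are a sketch, not recommendations:)
spark.executor.memory            1g
spark.executor.extraJavaOptions  -XX:+UseConcMarkSweepGC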
Hi,
Can you give us a little more insight on how you used that file to solve
your problem?
We're having the same OOM as you were and haven't been able to solve it yet.
Thanks
thank you, i added setJars, but nothing changes
val conf = new SparkConf()
  .setMaster("spark://127.0.0.1:7077")
  .setAppName("Simple App")
  .set("spark.executor.memory", "1g")
  .setJars(Seq("target/scala-2.10/simple-project_2.10-1.0.jar"))
val sc = new SparkContext(conf)
try the complete path
qinwei
From: wxhsdp
Date: 2014-04-24 14:21
To: user
Subject: Re: how to set spark.executor.memory and heap size
thank you, i added setJars, but nothing changes
val conf = new SparkConf()
  .setMaster("spark://127.0.0.1:7077")
  .setAppName("Simple App")
i tried, but no effect
Qin Wei wrote
try the complete path
qinwei
i think maybe it's a problem with reading a local file
val logFile = "/home/wxhsdp/spark/example/standalone/README.md"
val logData = sc.textFile(logFile).cache()
if i replace the above code with
val logData = sc.parallelize(Array(1,2,3,4)).cache()
the job can complete successfully
can't i read a
You need to use the proper URL format:
file://home/wxhsdp/spark/example/standalone/README.md
On Thu, Apr 24, 2014 at 1:29 PM, wxhsdp wxh...@gmail.com wrote:
i think maybe it's a problem with reading a local file
val logFile = "/home/wxhsdp/spark/example/standalone/README.md"
val logData =
Sorry, wrong format:
file:///home/wxhsdp/spark/example/standalone/README.md
An extra / is needed at the start.
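For example (a minimal sketch using the corrected URI and the path from
earlier in this thread):
val logData = sc.textFile("file:///home/wxhsdp/spark/example/standalone/README.md").cache()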
On Thu, Apr 24, 2014 at 1:46 PM, Adnan Yaqoob nsyaq...@gmail.com wrote:
You need to use the proper URL format:
file://home/wxhsdp/spark/example/standalone/README.md
On Thu, Apr 24,
thanks for your reply, adnan, i tried
val logFile = "file:///home/wxhsdp/spark/example/standalone/README.md"
i think three slashes are needed after "file:";
it behaves just the same as
val logFile = "/home/wxhsdp/spark/example/standalone/README.md"
the error remains :(
Hi,
You should be able to read it; file:// or file:/// is not even required for
reading locally, just the path is enough..
what error message are you getting in spark-shell while reading...
for local:
Also try reading the same file from HDFS...
put your README file there and read it, it works both ways..
hi arpit,
in the spark shell, i can read the local file properly,
but when i use sbt run, an error occurs.
the sbt error message is at the beginning of the thread
Arpit Tak-2 wrote
Hi,
You should be able to read it; file:// or file:/// is not even required for
reading locally, just the path is enough..
Okk fine,
try it like this, i tried and it works..
specify the spark path in the constructor as well...
and also
export SPARK_JAVA_OPTS="-Xms300m -Xmx512m -XX:MaxPermSize=1g"
import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._
object SimpleApp {
def main(args:
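A minimal self-contained sketch of such a SimpleApp (reusing the master URL,
jar path, and file path from earlier in this thread; treat it as an
illustration, not the exact code from this message) could look like this:

import org.apache.spark.{SparkConf, SparkContext}

object SimpleApp {
  def main(args: Array[String]): Unit = {
    // master URL, jar, and input path are taken from earlier messages here
    val conf = new SparkConf()
      .setMaster("spark://127.0.0.1:7077")
      .setAppName("Simple App")
      .set("spark.executor.memory", "1g")
      .setJars(Seq("target/scala-2.10/simple-project_2.10-1.0.jar"))
    val sc = new SparkContext(conf)
    val logData = sc.textFile("file:///home/wxhsdp/spark/example/standalone/README.md").cache()
    // count() materializes the whole file, unlike take(n)
    println("lines: " + logData.count())
    sc.stop()
  }
}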
it seems it's not about the settings. i tried the take action and found
it's ok, but an error occurs when i try count and collect
val a = sc.textFile("any file")
a.take(n).foreach(println) // ok: take(n) only brings n elements to the driver
a.count()   // failed: count evaluates the whole dataset
a.collect() // failed: collect pulls the whole dataset to the driver
val b = sc.parallelize(Array(1,2,3,4))
does anyone know the reason? i've googled a bit and found some people with
the same problem, but no replies...
i noticed that error occurs
at org.apache.hadoop.io.WritableUtils.readCompressedStringArray(WritableUtils.java:183)
at org.apache.hadoop.conf.Configuration.readFields(Configuration.java:2378)
at
Hi
I am also curious about this question.
Is the textFile function supposed to read an HDFS file? In this case the
file being read is on the local filesystem. Is there any way for the
textFile function to recognize whether a path refers to the local
filesystem or to HDFS?
Besides, the OOM
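(For what it's worth, textFile resolves the filesystem from the URI scheme;
a short sketch, where the namenode address is a made-up example:)
val localData   = sc.textFile("file:///home/wxhsdp/spark/example/standalone/README.md")
val hdfsData    = sc.textFile("hdfs://namenode:9000/user/wxhsdp/README.md")
val defaultData = sc.textFile("/user/wxhsdp/README.md") // scheme-less paths go to the default filesystem in the Hadoop config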
by the way, the code runs ok in the spark shell
When I was testing Spark, I faced this issue. It is not related to a memory
shortage; it happens because your configuration is not correct. Try passing
your current jar to the SparkContext with SparkConf's setJars function and
try again.
On Thu, Apr 24, 2014 at 8:38 AM, wxhsdp