Hi, I want to set the executor number to 16, but strangely the executor-cores
setting seems to affect the number of executors on Spark on YARN. I don't
understand why, or how to set the executor number.
./bin/spark-submit --class com.hequn.spark.SparkJoins \
--master
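The command above is cut off; for reference, on YARN the executor count is set directly with --num-executors, while --executor-cores only sets the cores per executor. One common reason the two appear coupled: YARN can only schedule as many containers as each node's core and memory limits allow, so raising --executor-cores can silently reduce the number of executors actually launched. A sketch of a full submit line (the master value, resource sizes, and jar name are placeholders):

```shell
# Sketch: explicit executor sizing on YARN (resource values are illustrative).
# YARN will cap the real executor count if the nodes lack the requested cores/memory.
./bin/spark-submit --class com.hequn.spark.SparkJoins \
  --master yarn-cluster \
  --num-executors 16 \
  --executor-cores 2 \
  --executor-memory 2g \
  your-app.jar
```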
I tried centralized cache step by step, following the Apache Hadoop official
website, but centralized cache doesn't seem to work.
See:
http://stackoverflow.com/questions/22293358/centralized-cache-failed-in-hadoop-2-3
Has anyone gotten it to work?
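For what it's worth, the documented sequence is to create a cache pool and then add a directive (pool and path names below are made up). A common reason caching silently does nothing is that dfs.datanode.max.locked.memory is unset in hdfs-site.xml, or the locked-memory ulimit on the DataNodes is too low:

```shell
# Sketch of the documented steps; pool name and path are hypothetical.
hdfs cacheadmin -addPool testPool
hdfs cacheadmin -addDirective -path /user/test/data -pool testPool
# Check that BYTES_CACHED is non-zero; if it stays 0, verify
# dfs.datanode.max.locked.memory in hdfs-site.xml and "ulimit -l" on the DataNodes.
hdfs cacheadmin -listDirectives -stats
```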
2014-05-15 5:30 GMT+08:00 William Kang
points.foreach(p => p.y = another_value) will not return a new RDD; foreach
returns Unit. Use points.map(...) to get a new, modified RDD.
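A note on the semantics here: RDD.foreach returns Unit, so it cannot produce a new RDD; map is the transformation that does. A minimal sketch of the distinction using a plain Scala collection and a hypothetical Point class (RDD.map behaves analogously, though lazily):

```scala
// Hypothetical mutable point, standing in for AppDataPoint.
case class Point(x: Double, var y: Double)

val points = Seq(Point(1.0, 0.0), Point(2.0, 0.0))

// foreach returns Unit: it only mutates in place, it cannot return a new collection.
points.foreach(p => p.y = 5.0)

// map builds and returns a new collection; the original elements are untouched by it.
val shifted: Seq[Point] = points.map(p => p.copy(y = p.y + 1.0))
```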
2014-03-24 18:13 GMT+08:00 Chieh-Yen r01944...@csie.ntu.edu.tw:
Dear all,
I have a question about the usage of RDD.
I implemented a class called AppDataPoint; it looks like:
case class AppDataPoint(input_y : Double,
running, is that right?
--
From: hequn cheng chenghe...@gmail.com
Sent: 2014/3/25 9:35
To: user@spark.apache.org
Subject: Re: RDD usage
persist and unpersist.
unpersist: Mark the RDD as non-persistent, and remove all blocks for it from
memory and disk.
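A minimal sketch of that lifecycle (the SparkContext `sc` and the input path are assumed, so this fragment needs a running Spark application; blocks are only written on the first action because RDDs are lazy):

```scala
// Sketch of the RDD cache lifecycle; `sc` and the HDFS path are placeholders.
import org.apache.spark.storage.StorageLevel

val rdd = sc.textFile("hdfs:///some/path").map(_.length)

rdd.persist(StorageLevel.MEMORY_AND_DISK) // mark for caching; nothing happens yet
rdd.count()                               // first action computes and caches the blocks
rdd.count()                               // served from the cached blocks
rdd.unpersist()                           // drop all of its blocks from memory and disk
```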
2014-03-19 16:40 GMT+08:00 林武康 vboylin1...@gmail.com:
Hi, can anyone tell me about the lifecycle of an RDD? I searched through
the official website and still can't figure it out.
hi
Have you sent spark-env.sh to the slave nodes?
2014-03-11 6:47 GMT+08:00 Linlin linlin200...@gmail.com:
Hi,
I have a Java option (-Xss) specified in SPARK_JAVA_OPTS in
spark-env.sh. I noticed that after stopping and restarting the Spark cluster, the
master/worker daemon has the setting being
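Related to the question above: spark-env.sh is only read when a daemon starts, and each machine reads its own copy, so the file has to be present on every node before the restart. A sketch (the -Xss value is just an illustration):

```shell
# conf/spark-env.sh -- must exist on the master and on every slave node;
# daemons pick it up only at start-up, so redeploy it before restarting.
SPARK_JAVA_OPTS="-Xss4m"
```

After copying the file to each node (scp or a deployment tool), restart with sbin/stop-all.sh and sbin/start-all.sh so the daemons re-read it.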