Thank you for your reply.
But sometimes it succeeds when I rerun the job.
And the job processes the same data using the same code.
From: Margusja
Date: 2017-11-09 14:25
To: bing...@iflytek.com
CC: user
Subject: Re: spark job paused(active stages finished)
You have to deal with
again.
I just want to know whether Spark will resubmit the completed tasks if the
later tasks, while executing, cannot find their output?
Thanks for any explanation.
--
Bing Jiang
ency
query and analytics of large scale data sets in vertical enterprises. We will
continue to work with the community to develop new features and improve code
base. Your comments and suggestions are greatly appreciated.
Yan Zhou / Bing Xiao
Huawei Big Data team
I have tested it in spark-1.1.0-SNAPSHOT.
It is OK now.
From: Xiangrui Meng [mailto:men...@gmail.com]
Sent: 2014-08-06 23:12
To: Lizhengbing (bing, BIPA)
CC: user@spark.apache.org
Subject: Re: fail to run LBFS in 5G KDD data in spark 1.0.1?
Do you mind testing 1.1-SNAPSHOT and allocating more memory to
1. I don't use spark-submit to run my program; I use SparkContext directly:
val conf = new SparkConf()
  .setMaster("spark://123d101suse11sp3:7077")
  .setAppName("LBFGS")
  .set("spark.executor.memory", "30g")
  .set("spark.akka.frameSize", "20")
val sc = new SparkContext(conf)
I want to use the Spark cluster through a Scala function, so I can integrate
Spark into my program directly.
For example:
When I call the count function in my own program, the program will deploy the
function to the cluster, so I can get the result directly.
def count()=
{
val master = "spark://ma
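The idea above (a library function that spins up its own SparkContext and runs a distributed count) can be sketched as follows. This is a minimal illustration against the Spark 1.x API; the master URL and input path are placeholders, not values from the original message.

```scala
// Hypothetical sketch: embedding Spark in an ordinary Scala function
// (Spark 1.x API). Master URL and data path below are placeholders.
import org.apache.spark.{SparkConf, SparkContext}

def count(): Long = {
  val conf = new SparkConf()
    .setMaster("spark://master-host:7077") // placeholder master URL
    .setAppName("embedded-count")
  val sc = new SparkContext(conf)
  try {
    // Deploys the count job to the cluster and returns the result directly.
    sc.textFile("hdfs:///path/to/data").count() // placeholder input path
  } finally {
    sc.stop() // release cluster resources even if the job fails
  }
}
```

Creating and stopping a SparkContext per call is expensive; a long-lived program would normally create one context and reuse it across calls.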
You might store your data in Tachyon.
From: Jahagirdar, Madhu [mailto:madhu.jahagir...@philips.com]
Sent: 2014-07-08 10:16
To: user@spark.apache.org
Subject: Spark RDD Disk Persistance
Should I use disk-based persistence for RDDs? And if the machine goes down
during the program execution, next ti
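For context, a minimal sketch of the two relevant mechanisms in the Spark 1.x API: DISK_ONLY persistence (blocks on executor-local disk, recomputed from lineage if an executor is lost, but gone after the application exits) versus checkpointing to a reliable filesystem. All paths below are placeholders.

```scala
// Sketch, assuming the Spark 1.x API. DISK_ONLY spills cached blocks to
// executor-local disk; it does NOT survive a driver/machine restart.
// For durability across failures, checkpoint to a reliable filesystem.
import org.apache.spark.SparkContext
import org.apache.spark.storage.StorageLevel

def persistExample(sc: SparkContext): Unit = {
  sc.setCheckpointDir("hdfs:///tmp/checkpoints") // placeholder path
  val rdd = sc.textFile("hdfs:///path/to/input") // placeholder path
    .map(_.length)
  rdd.persist(StorageLevel.DISK_ONLY) // cache blocks on local disk
  rdd.checkpoint()                    // durable copy in the checkpoint dir
  rdd.count()                         // action materializes both
}
```

The usual trade-off: persistence is cheap and lineage-recoverable within one run, while checkpointing truncates the lineage and survives the run, at the cost of a full write to HDFS.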
Sometimes the shuffle write of flatMap is 14.8 GB, and sometimes it is 647.9 MB.
Why does this happen?
The training data is about 1.5 GB, and the number of features is 200.
| Stage Id | Description | Submitted | Duration | Tasks: Succeeded/Total | Shuffle Read | Shuffle Write |
| 114 | flatMap at ALS.scala:434 | 2014/ | | | | |