unsubscribe

2024-05-03 Thread Bing
Replied Message: From: Wood Super | Date: 05/01/2024 07:49 | To: user | Subject: unsubscribe | unsubscribe

Re: Re: spark job paused(active stages finished)

2017-11-09 Thread bing...@iflytek.com
Thank you for your reply. But sometimes it succeeds when I rerun the job, and the job processes the same data using the same code. From: Margusja Date: 2017-11-09 14:25 To: bing...@iflytek.com CC: user Subject: Re: spark job paused(active stages finished) You have to deal

Does spark restart the executors if its nodemanager crashes?

2016-01-12 Thread Bing Jiang
I just want to know whether Spark will resubmit the completed tasks if the later tasks, while executing, cannot find their output. Thanks for any explanation. -- Bing Jiang
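A minimal sketch of the behavior in question, assuming a standalone Spark application (the input path is hypothetical): Spark tracks each RDD's lineage, so when running tasks cannot fetch the shuffle output of already-completed tasks, the scheduler resubmits the parent stage to regenerate the lost map output rather than failing the whole job.

import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf().setAppName("lineage-recovery-sketch")
val sc = new SparkContext(conf)

// If an executor holding the map output of `pairs` is lost, the reduce
// tasks report a fetch failure and Spark re-runs the producing stage.
val pairs = sc.textFile("hdfs:///data/input").map(line => (line.split(",")(0), 1))
val counts = pairs.reduceByKey(_ + _)   // shuffle boundary between stages
counts.count()

sc.stop()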

Package Release Announcement: Spark SQL on HBase Astro

2015-07-22 Thread Bing Xiao (Bing)
… in vertical enterprises. We will continue to work with the community to develop new features and improve the code base. Your comments and suggestions are greatly appreciated. Yan Zhou / Bing Xiao, Huawei Big Data team

fail to run LBFS in 5G KDD data in spark 1.0.1?

2014-08-06 Thread Lizhengbing (bing, BIPA)
1. I don't use spark-submit to run my program; I use a SparkContext directly:

val conf = new SparkConf()
  .setMaster("spark://123d101suse11sp3:7077")
  .setAppName("LBFGS")
  .set("spark.executor.memory", "30g")
  .set("spark.akka.frameSize", "20")
val sc = new SparkContext(conf)
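For context, a minimal sketch of driving MLlib's L-BFGS directly from such a SparkContext in Spark 1.x, following the MLlib optimization guide (the data path and the parameter values are assumptions for illustration, not the poster's actual code):

import org.apache.spark.mllib.linalg.Vectors
import org.apache.spark.mllib.optimization.{LBFGS, LogisticGradient, SquaredL2Updater}
import org.apache.spark.mllib.util.MLUtils

// Load KDD data in LIBSVM format (hypothetical path) and append a bias term.
val data = MLUtils.loadLibSVMFile(sc, "hdfs:///data/kddb")
val numFeatures = data.take(1)(0).features.size
val training = data.map(p => (p.label, MLUtils.appendBias(p.features))).cache()

// 10 corrections, 1e-4 convergence tolerance, at most 100 iterations, L2 reg 0.1.
val (weightsWithIntercept, lossHistory) = LBFGS.runLBFGS(
  training,
  new LogisticGradient(),
  new SquaredL2Updater(),
  10, 1e-4, 100, 0.1,
  Vectors.dense(new Array[Double](numFeatures + 1)))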

Re: fail to run LBFS in 5G KDD data in spark 1.0.1?

2014-08-06 Thread Lizhengbing (bing, BIPA)
I have tested it in spark-1.1.0-SNAPSHOT. It is OK now. From: Xiangrui Meng [mailto:men...@gmail.com] Sent: 2014-08-06 23:12 To: Lizhengbing (bing, BIPA) Cc: user@spark.apache.org Subject: Re: fail to run LBFS in 5G KDD data in spark 1.0.1? Do you mind testing 1.1-SNAPSHOT and allocating more memory

How can I integrate spark cluster into my own program without using spark-submit?

2014-07-26 Thread Lizhengbing (bing, BIPA)
I want to use the Spark cluster through a Scala function, so I can integrate Spark into my program directly. For example: when I call a count function in my own program, my program will deploy the function to the cluster, so I can get the result directly. def count() = { val master =
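One way such a wrapper could look, as a hedged sketch (the master URL, path, and jar name are placeholders; note that without spark-submit you must ship your application jar to the executors yourself, e.g. via setJars):

import org.apache.spark.{SparkConf, SparkContext}

def count(master: String, path: String): Long = {
  val conf = new SparkConf()
    .setMaster(master)                  // e.g. "spark://host:7077"
    .setAppName("embedded-count")
    .setJars(Seq("target/myapp.jar"))   // make the caller's classes visible to executors
  val sc = new SparkContext(conf)
  try sc.textFile(path).count()
  finally sc.stop()
}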

Re: Spark RDD Disk Persistence

2014-07-08 Thread Lizhengbing (bing, BIPA)
You might store your data in Tachyon. From: Jahagirdar, Madhu [mailto:madhu.jahagir...@philips.com] Sent: 2014-07-08 10:16 To: user@spark.apache.org Subject: Spark RDD Disk Persistence Should I use disk-based persistence for RDDs, and if the machine goes down during program execution, next
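A short sketch of the distinction at issue, assuming an existing SparkContext sc (the paths are hypothetical): DISK_ONLY persistence is written to the executors' local disks and is lost with them, whereas checkpointing to a fault-tolerant store (HDFS, or Tachyon as suggested above) survives a crash and can be reloaded by a later run.

import org.apache.spark.storage.StorageLevel

val rdd = sc.textFile("hdfs:///data/input").map(_.toUpperCase)

// Local disk persistence: avoids recomputation while the app is alive,
// but the blocks disappear if the machine or the application goes down.
rdd.persist(StorageLevel.DISK_ONLY)

// Checkpointing to a fault-tolerant filesystem survives failures.
sc.setCheckpointDir("hdfs:///checkpoints")   // or a tachyon:// URI
rdd.checkpoint()
rdd.count()   // action forces both the persist and the checkpoint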