--
*From:* Barak Gitsis <bar...@similarweb.com>
*Sent:* Sunday, August 2, 2015, 9:55 PM
*To:* Sea <261810...@qq.com>; Ted Yu <yuzhih...@gmail.com>
*Cc:* user@spark.apache.org; rxin <r...@databricks.com>; joshrosen
<joshro...@databricks.com>; davies <dav...@databricks.com>
*Subject:* Re: About
I don't know whether it will still exist in the next release, 1.5. I hope
not.
hi,
I've run into some poor RF behavior too, although not as pronounced as yours.
Would be great to get more insight into this one.
Thanks!
On Mon, Aug 3, 2015 at 8:21 AM pkphlam <pkph...@gmail.com> wrote:
Hi,
This might be a long shot, but has anybody run into very poor predictive
performance
Hi,
Reducing spark.storage.memoryFraction did the trick for me. The heap doesn't
get filled because that fraction is reserved.
My reasoning is:
I give the executor all the memory I can give it, so that makes it a boundary.
From there I try to make the best use of memory I can.
storage.memoryFraction is in a sense
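For reference, the setting discussed above belongs to Spark 1.x's static memory management and can be tuned in spark-defaults.conf or at submit time. The value 0.3 below is purely illustrative (the Spark 1.4 default is 0.6), not a recommendation:

```
# spark-defaults.conf (Spark 1.x static memory management)
# spark.storage.memoryFraction: share of executor heap reserved for
# cached/storage blocks (default 0.6 in Spark 1.4).
spark.storage.memoryFraction   0.3

# Equivalently, per job at submit time:
#   spark-submit --conf spark.storage.memoryFraction=0.3 ...
```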
It is OK with Spark 1.3.0; the problem is with Spark 1.4.1.
I don't think spark.storage.memoryFraction will make any difference,
because that memory is still on the heap.
-- Original Message --
*From:* Barak Gitsis <bar...@similarweb.com>
*Sent:* Sunday, August 2, 2015, 4:11 PM
*To: