*Sent:* Tuesday, 4 August 2015, 10:28 PM
*To:* "Igor Berman" <igor.ber...@gmail.com>
*Cc:* "Sea" <261810...@qq.com>; "Barak Gitsis" <bar...@similarweb.com>; "user@spark.apache.org" <user@spark.apache.org>; "rxin" <r...@databricks.com>; "joshrosen" <joshro...@databricks.com>; "davies" <dav...@databricks.com>
*Subject:* Re: About memory leak in spark 1.4.1
w.r.t. spark.deploy.spreadOut, here is the scaladoc:
// As a temporary workaround before
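For anyone following along: spreadOut is a master-side setting in standalone mode and defaults to true. A minimal sketch of turning it off, assuming the conventional conf/spark-defaults.conf location on the master (the property name is real; whether you want this depends on your workload):

```
# Consolidate each app onto fewer nodes instead of round-robin
# spreading executors across the whole cluster
# (spark.deploy.spreadOut defaults to true in standalone mode)
spark.deploy.spreadOut  false
```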
*Subject:* Re: About memory leak in spark 1.4.1
in general, what is your configuration? use --conf spark.logConf=true
we have 1.4.1 in a production standalone cluster and haven't experienced
what you are describing.
can you verify in the web-ui that spark indeed got your 50g per executor
limit? I mean
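Igor's suggestion can go on the submit command line or into spark-defaults.conf; a minimal sketch of the latter (the 50g value is just the figure under discussion in this thread, not a recommendation):

```
# Log the effective SparkConf at startup, so the driver log shows
# exactly which settings actually took effect
spark.logConf          true
spark.executor.memory  50g
```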
*Sent:* (Sunday) 9:55 PM
*To:* "Sea" <261810...@qq.com>; "Ted Yu" <yuzhih...@gmail.com>
*Cc:* user@spark.apache.org; "rxin" <r...@databricks.com>; "joshrosen" <joshro...@databricks.com>; "davies" <dav...@databricks.com>
*Subject:* Re: About memory leak in spark 1.4.1
spark uses a lot more than heap memory; it is the expected behavior.
in 1.4 off-heap memory usage is supposed to grow in comparison to 1.3.
Better use
Hi,
reducing spark.storage.memoryFraction did the trick for me. The heap doesn't
get filled because it is reserved.
My reasoning is:
I give the executor all the memory I can give it, so that makes it a boundary.
From there I try to make the best use of memory I can.
storage.memoryFraction is in a sense
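To make Barak's budget reasoning concrete, here is a small sketch of the arithmetic under the pre-1.6 ("legacy") memory model, where the storage region gets spark.storage.memoryFraction (default 0.6) of the heap, further scaled by spark.storage.safetyFraction (default 0.9). The function name is illustrative; the 50g figure is the one discussed earlier in the thread.

```python
def storage_budget_gb(executor_heap_gb, memory_fraction=0.6, safety_fraction=0.9):
    """Heap set aside for cached blocks under the legacy (pre-1.6) model."""
    return executor_heap_gb * memory_fraction * safety_fraction

# With 50g executors, the defaults reserve ~27 GB for storage; lowering
# spark.storage.memoryFraction to 0.3 frees roughly half of that for
# execution and other heap users.
print(storage_budget_gb(50))       # 27.0
print(storage_budget_gb(50, 0.3))  # 13.5
```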
*Subject:* Re: About memory leak in spark 1.4.1
http://spark.apache.org/docs/latest/tuning.html does mention
spark.storage.memoryFraction in two places.
One is under the Cache Size Tuning section.
FYI
On Sun, Aug 2, 2015 at 2:16 AM, Sea <261810...@qq.com> wrote:
Hi, Barak
Hi, all
I upgraded spark to 1.4.1 and many applications failed... I find the heap
memory is not full, but the CoarseGrainedExecutorBackend process takes more
memory than I expect, and it keeps increasing as time goes on; finally it
exceeds the max limit of the server and the worker dies.
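A quick way to confirm the growth Sea describes is to sample the executor process's resident set size over time. The sketch below reads VmRSS from /proc, so it is Linux-only; the pid used here is the script's own, purely so the example runs standalone — point it at the CoarseGrainedExecutorBackend pid in practice.

```python
import os
import time

def rss_kb(pid):
    """Resident set size (kB) of a process, read from /proc (Linux only)."""
    with open(f"/proc/{pid}/status") as f:
        for line in f:
            if line.startswith("VmRSS:"):
                return int(line.split()[1])
    raise RuntimeError(f"VmRSS not found for pid {pid}")

# Sample repeatedly; a steadily climbing series that passes the -Xmx heap
# cap points at off-heap growth rather than a heap leak.
pid = os.getpid()  # replace with the executor's pid
samples = []
for _ in range(3):
    samples.append(rss_kb(pid))
    time.sleep(0.1)
print(samples)
```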