Hi Deb,
I think this may be the same issue as described in
https://issues.apache.org/jira/browse/SPARK-2121. We know that the
container got killed by YARN because it used much more memory than it
requested, but we haven't figured out the root cause yet.
+Sandy
Best,
Xiangrui
On Tue, Aug 19,
I could reproduce the issue in both 1.0 and 1.1 using YARN, so this is
definitely a YARN-related problem.
At least for me, the only deployment option possible right now is standalone.
Hi Debasish,
The fix is to raise spark.yarn.executor.memoryOverhead until this goes
away. This controls the buffer between the JVM heap size and the amount of
memory requested from YARN (JVMs can take up memory beyond their heap
size). You should also make sure that, in the YARN NodeManager
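The sizing logic Sandy describes can be sketched as follows. This is a minimal illustration, assuming the Spark 1.x behaviour where the overhead defaults to a flat 384 MB when `spark.yarn.executor.memoryOverhead` is not set (the exact default value is an assumption here, not stated in the thread):

```python
# Sketch of how the YARN container request is sized: the JVM heap
# (spark.executor.memory) plus an off-heap buffer, since JVMs use
# memory beyond their heap size.
DEFAULT_OVERHEAD_MB = 384  # assumed Spark 1.x default when unset

def container_request_mb(executor_memory_mb, memory_overhead_mb=None):
    """Total memory requested from YARN for one executor container."""
    if memory_overhead_mb is None:
        memory_overhead_mb = DEFAULT_OVERHEAD_MB
    return executor_memory_mb + memory_overhead_mb

# With an 8 GB heap and the default buffer, YARN is asked for 8576 MB;
# raising the overhead to 1024 MB bumps the request to 9216 MB.
print(container_request_mb(8192))        # 8576
print(container_request_mb(8192, 1024))  # 9216
```

In practice the overhead can be passed at submit time, e.g. `spark-submit --conf spark.yarn.executor.memoryOverhead=1024 ...`; treat the value as a starting point to raise until the container kills stop.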
Hi,
There have been some recent changes in the way Akka is used in Spark, and I
feel they are major changes...
Is there a design document / JIRA / experiment on large datasets that
highlights the impact of the changes (1.0 vs 1.1)? Basically it would be great
to understand where Akka is used in the
I just updated today's build and tried branch-1.1 for both yarn and
yarn-alpha.
For the yarn build, this command seems to work fine:
sbt/sbt -Pyarn -Dhadoop.version=2.3.0-cdh5.0.1 projects
For yarn-alpha:
sbt/sbt -Pyarn-alpha -Dhadoop.version=2.0.5-alpha projects
I got the following:
Any ideas?
Just tried the master branch, and it works fine for yarn-alpha.
On Wed, Aug 20, 2014 at 4:39 PM, Chester Chen ches...@alpinenow.com wrote: