To: Brahma Reddy Battula
Cc: Akhil Das; user@spark.apache.org
Subject: Re: Running beyond physical memory limits
This is not related to executor memory, but to the extra overhead
subtracted from the executor's size in order to avoid using more than
the physical memory that YARN allows. That is, if you declare …
[pid=126843,containerID=container_1429065217137_0012_01_-411041790] is running beyond physical memory limits. Current usage: 26.0 GB of 26 GB physical memory used; 26.7 GB of 260 GB virtual memory used. Killing container.
Dump of the process-tree for container_1429065217137_0012_01_-411041790 :
|- PID PPID PGRPID SESSID CMD_NAME
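The sizing described in the reply above can be sketched numerically. This is a minimal sketch, assuming the Spark 1.x defaults for spark.yarn.executor.memoryOverhead (a 384 MB floor and a 7% factor); other Spark versions use different constants, so treat the numbers as illustrative only:

```python
# Sketch of how a YARN container request is derived from spark.executor.memory.
# Assumption: default overhead = max(384 MB, 7% of executor memory), as in
# Spark 1.x; newer versions use a different factor.

MEMORY_OVERHEAD_MIN_MB = 384
MEMORY_OVERHEAD_FACTOR = 0.07

def container_request_mb(executor_memory_mb):
    """Executor heap plus the off-heap overhead YARN must also account for."""
    overhead = max(MEMORY_OVERHEAD_MIN_MB,
                   int(executor_memory_mb * MEMORY_OVERHEAD_FACTOR))
    return executor_memory_mb + overhead

# A 25 GB executor does not fit a 26 GB container limit once overhead is added,
# which is how a container ends up "running beyond physical memory limits":
limit_mb = 26 * 1024              # the 26 GB physical limit from the log above
request = container_request_mb(25 * 1024)
print(request, request > limit_mb)
```

In other words, the container YARN monitors is larger than the heap you asked for, so the heap alone must not be sized right up to the container limit.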
From: Akhil Das [ak...@sigmoidanalytics.com]
Sent: Wednesday, April 15, 2015 2:35 PM
To: Brahma Reddy Battula
Cc: user@spark.apache.org
Subject: Re: Running beyond physical memory limits
Did you try reducing your spark.executor.memory?
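Reducing spark.executor.memory helps because it leaves headroom for the overhead under the YARN container cap. A hedged sketch of choosing a heap size that fits, again assuming a max(384 MB, 7%) overhead rule rather than asserting it for every Spark version:

```python
# Sketch: largest spark.executor.memory (in MB) that still fits a YARN
# container cap. The overhead rule max(384 MB, 7% of executor memory) is an
# assumption matching Spark 1.x defaults, not a guarantee for every version.

def max_executor_memory_mb(container_cap_mb,
                           overhead_min_mb=384, overhead_factor=0.07):
    # Largest m with m + max(overhead_min_mb, int(m * overhead_factor)) <= cap.
    for m in range(container_cap_mb, 0, -1):
        overhead = max(overhead_min_mb, int(m * overhead_factor))
        if m + overhead <= container_cap_mb:
            return m
    return 0

# With the 26 GB cap from the log, the heap must stay well below the cap:
print(max_executor_memory_mb(26 * 1024))
```

The point of the sketch is only that the safe heap size is the cap minus the overhead, not the cap itself.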
The container is getting killed.. I have seen SPARK-1930, and it should be fixed in 1.2..

*Any pointer to the following error, i.e. what might lead to this error?*
2015-04-15 11:55:39,697 | WARN | Container Monitor | Container [pid=126843,containerID=container_1429065217137_0012_01_-411041790] is running beyond physical memory limits. Current usage: 26.0 GB of 26 GB physical memory used; 26.7 GB of 260 GB virtual memory used. Killing container.
Dump of the process-tree for container_1429065217137_0012_01_-411041790 :
|- PID PPID PGRPID SESSID CMD_NAME