Main class [org.apache.oozie.action.hadoop.SparkMain], main() threw exception, PermGen space
2016-08-03 22:33:43,319 WARN SparkActionExecutor:523 - SERVER[ip-10-0-0-161.ec2.internal] USER[hadoop] GROUP[-] TOKEN[] APP[ApprouteOozie] JOB[0000031-160803180548580-oozie-oozi-W] ACTION[0000031-160803180548580-oozie-oozi-W@spark-approute] Launcher exception: PermGen space
java.lang.OutOfMemoryError: PermGen space
Sorry, to answer your question fully: the job starts tasks, and a few of them fail while
others succeed. The failed ones have that PermGen error in their logs, but ultimately the
whole job is marked as failed and the session quits.
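If it is the Spark tasks themselves that die with PermGen rather than the launcher (whose own JVM can be given more room through the oozie.launcher.mapreduce.map.java.opts property), the extra JVM options have to reach the driver and the executors. A minimal sketch of the submit flags, with a placeholder class and jar name and illustrative sizes, not settings confirmed in this thread:

    # class name and jar below are placeholders for the real application
    ./bin/spark-submit \
      --master yarn-cluster \
      --driver-java-options "-XX:MaxPermSize=256m" \
      --conf spark.executor.extraJavaOptions=-XX:MaxPermSize=256m \
      --class com.example.ApprouteJob \
      approute.jar

For an Oozie Spark action the same flags would normally go in the action's <spark-opts> element rather than on a command line.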
On Sun, Sep 13, 2015 at 10:48 AM, Jagat Singh wrote:
> Hi
Hi Davies,
This was the first query on the new version.
The one which ran successfully was the Spark Pi example:
./bin/spark-submit --class org.apache.spark.examples.SparkPi \
--master yarn-client \
--num-executors 3 \
--driver-memory 4g \
--executor-memory 2g \
--executor-cores 1 \
Did this happen immediately after you started the cluster, or after you ran
some queries?
Is this in local mode or cluster mode?
On Fri, Sep 11, 2015 at 3:00 AM, Jagat Singh wrote:
> Hi,
>
> We have queries which were running fine on 1.4.1 system.
>
> We are testing upgrade and
Hi,
We have queries which were running fine on the 1.4.1 system.
We are testing the upgrade, and even a simple query like
val t1 = sqlContext.sql("select count(*) from table")
t1.show
works perfectly fine on 1.4.1 but throws an OOM error in 1.5.0.
Are there any changes in the default memory settings from 1.4.1 to 1.5.0?
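One quick way to tell a PermGen problem apart from a plain heap problem (the sizes below are illustrative, not settings taken from this thread) is to relaunch spark-shell with both limits stated explicitly and re-run the same two lines:

    # illustrative sizes; adjust for the cluster
    ./bin/spark-shell --master yarn-client \
      --driver-memory 4g \
      --driver-java-options "-XX:MaxPermSize=256m"

If the count(*) then succeeds on 1.5.0, the failure was permanent-generation space in the driver rather than executor heap.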
Have you seen this thread?
http://search-hadoop.com/m/q3RTtPPuSvBu0rj2
> On Sep 11, 2015, at 3:00 AM, Jagat Singh wrote:
>
> Hi,
>
> We have queries which were running fine on 1.4.1 system.
>
> We are testing upgrade and even simple query like
> val t1 = sqlContext.sql("select count(*) from table")
Stati,
Change SPARK_REPL_OPTS to SPARK_SUBMIT_OPTS and try again. I faced the same
issue and making this change worked for me. I looked at the spark-shell
file under the bin dir and found SPARK_SUBMIT_OPTS being used.
SPARK_SUBMIT_OPTS=-XX:MaxPermSize=256m bin/spark-shell --master
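For reference, a complete invocation in this style might look like the following; the master URL and --jars path are placeholders, since the original command is cut off above:

    # master URL and --jars path are placeholders
    SPARK_SUBMIT_OPTS=-XX:MaxPermSize=256m ./bin/spark-shell \
      --master yarn-client \
      --jars /path/to/external-libs.jar

As noted above, the spark-shell script under bin reads SPARK_SUBMIT_OPTS when it launches the JVM, which is why this variable takes effect where SPARK_REPL_OPTS no longer does.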
Hello,
I moved from 1.3.1 to 1.4.0 and started receiving
java.lang.OutOfMemoryError: PermGen space when I use spark-shell.
The same Scala code works fine in the 1.3.1 spark-shell. I was loading the same set of
external JARs and had the same imports in 1.3.1.
I tried increasing the perm size to 256m and still got the same error.
Did you try to pass it with
--driver-java-options -XX:MaxPermSize=256m
as spark-shell input argument?
Roberto
On Wed, Jun 24, 2015 at 5:57 PM, stati srikanth...@gmail.com wrote:
Hello,
I moved from 1.3.1 to 1.4.0 and started receiving
java.lang.OutOfMemoryError: PermGen space when I use spark-shell.
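A persistent alternative to passing the flag on every launch (the property is standard Spark configuration, the value is illustrative) is to add one line to conf/spark-defaults.conf:

    spark.driver.extraJavaOptions  -XX:MaxPermSize=256m

Since spark-shell loads the external JARs into the driver's interpreter classloader, it is usually the driver-side PermGen that needs the extra room in shell sessions.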