I am running a streaming job. It runs fine in local and yarn-client mode, but
when I run it in yarn-cluster mode it fails with a PermGen space error.
Can anyone help me out?
Spark version: 1.5.0
Hadoop: 2.6.0
Java: 1.7
16/10/06 15:08:40 INFO scheduler.JobScheduler: Added jobs for time 147574672
I am running the spark job in yarn-client mode.
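In yarn-cluster mode the driver runs inside the YARN application master rather than in the JVM that launches the job, so a PermGen setting that only reaches the local client will not help it. A minimal spark-submit sketch, assuming a placeholder class and jar and illustrative sizes, that raises MaxPermSize for both the driver and the executors:

  # placeholders: class name, jar and sizes are not from this thread
  spark-submit --master yarn-cluster \
    --conf "spark.driver.extraJavaOptions=-XX:MaxPermSize=512m" \
    --conf "spark.executor.extraJavaOptions=-XX:MaxPermSize=512m" \
    --class com.example.StreamingApp \
    streaming-app.jar

In cluster mode spark.driver.extraJavaOptions has to be supplied at submit time (or via spark-defaults.conf); setting it from inside the application comes too late, because the driver JVM is already running.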
On Wed, Aug 3, 2016 at 8:14 PM, $iddhe$h Divekar wrote:
> Hi,
>
> I am running Spark jobs using Apache Oozie.
> My job.properties has a sparkConf entry which gets used in workflow.xml.
>
> I have tried increasing MaxPermSize using sparkConf in job.properties
> but that is not resolving the issue.
>
> *sparkConf*=--verbose --driver-java-options '-XX:MaxPermSize=8192M' --conf s
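If the executors are also running with the default PermGen, raising MaxPermSize only through --driver-java-options may not be enough. The quoted sparkConf line breaks off mid-option ("--conf s"), so the following is only a sketch of what a fuller job.properties value could look like; the executor option and the sizes are assumptions:

  # hypothetical sketch; the executor setting is an assumption, sizes are illustrative
  sparkConf=--verbose --driver-java-options '-XX:MaxPermSize=8192M' --conf spark.executor.extraJavaOptions=-XX:MaxPermSize=1024M

It may also be worth checking the Oozie launcher job's stdout to confirm that workflow.xml passes ${sparkConf} through to the final spark-submit command unchanged.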
> >>
> >> I've written a scala program which
> >> => instantiates a spark and hive context
> >> => parses an XML file which provides the where clauses for queries
> >> => generates full-fledged hive queries to be run on hive tables
> >> …CPUs and 12GB RAM.
>
> On Wed, Jul 29, 2015 at 2:49 PM, fightf...@163.com wrote:
>>
>> Hi, Sarath
>>
>> Did you try to use and increase spark.executor.extraJavaOptions
>> -XX:PermSize= -XX:MaxPermSize= ?
>>
>>
> fightf...@163.com
>
>
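For reference, the two options suggested here can also be made the default for every submission by putting them in conf/spark-defaults.conf instead of passing them per job; a sketch with illustrative sizes (these flags only matter on Java 7, since Java 8 removed PermGen):

  # conf/spark-defaults.conf, sizes are illustrative
  spark.driver.extraJavaOptions      -XX:PermSize=128m -XX:MaxPermSize=512m
  spark.executor.extraJavaOptions    -XX:PermSize=128m -XX:MaxPermSize=512m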
> *From:* Sarath Chandra
> *Date:* 2015-07-29 17:39
> *To:* user@spark.apache.org
> *Subject:* PermGen Space Error
> Dear All,
>
> I'm using -
> => Spark 1.2.0
> => Hive 0.13.1
> => Mesos 0.18.1
> => Spring
>
> …program to create some useful value objects using input parameters and
> properties files and then calls the above scala program).
>
> *I'm getting PermGen Space error when it hits the last line to print the
> count.*
>
> I'm printing to console the generated hive queries from the scala program.