Sorry, to answer your question fully:

The job starts tasks; a few of them fail and some succeed. The failed
ones have that PermGen error in the logs.

But ultimately the whole job is marked as failed and the session quits.
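
For reference, the invocation I plan to try on Monday looks something like
this (just a sketch based on the suggestion in this thread; the 256m value
is a guess on my side, not a tested setting):

# assumption: 256m is an untested starting value; adjust as needed
./bin/spark-shell --num-executors 2 --executor-memory 512m \
    --master yarn-client \
    --driver-java-options "-XX:MaxPermSize=256m" \
    --conf "spark.executor.extraJavaOptions=-XX:MaxPermSize=256m"

--driver-java-options should raise PermGen on the driver JVM that
spark-shell launches in yarn-client mode, and
spark.executor.extraJavaOptions does the same for the executors, since it
is the failed tasks that show the PermGen error in the logs.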


On Sun, Sep 13, 2015 at 10:48 AM, Jagat Singh <jagatsi...@gmail.com> wrote:

> Hi Davies,
>
> This was the first query on the new version.
>
> The one that ran successfully was the SparkPi example:
>
> ./bin/spark-submit --class org.apache.spark.examples.SparkPi \
>     --master yarn-client \
>     --num-executors 3 \
>     --driver-memory 4g \
>     --executor-memory 2g \
>     --executor-cores 1 \
>     --queue default \
>     lib/spark-examples*.jar \
>     10
>
> Then I tried spark-shell, which was started without any extra memory,
> garbage collection, or PermGen configuration:
>
> ./bin/spark-shell --num-executors 2 --executor-memory 512m --master yarn-client
>
> val t1 = sqlContext.sql("select count(*) from table")
>
> t1.show
>
> This one fails with the PermGen error.
>
> On Monday I will try the suggested solution of passing extra PermGen
> space to the driver.
>
> Thanks,
>
> On Sat, Sep 12, 2015 at 2:57 AM, Davies Liu <dav...@databricks.com> wrote:
>
>> Did this happen immediately after you started the cluster, or after
>> running some queries?
>>
>> Is this in local mode or cluster mode?
>>
>> On Fri, Sep 11, 2015 at 3:00 AM, Jagat Singh <jagatsi...@gmail.com>
>> wrote:
>> > Hi,
>> >
>> > We have queries that were running fine on a 1.4.1 system.
>> >
>> > We are testing the upgrade, and even a simple query like
>> >
>> > val t1 = sqlContext.sql("select count(*) from table")
>> >
>> > t1.show
>> >
>> > works perfectly fine on 1.4.1 but throws an OOM error on 1.5.0.
>> >
>> > Are there any changes in the default memory settings from 1.4.1 to 1.5.0?
>> >
>> > Thanks,
>> >
>> >
>> >
>>
>
>
