Hi everyone,

I am trying to load some data from a Hive table into my notebook and then
convert the resulting Spark DataFrame into an R data.frame using the
spark.r interpreter. This works perfectly for small amounts of data. Here
is roughly what I am running (a minimal sketch, assuming the Spark 2.x
SparkR API; the table name is a placeholder):
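
%spark.r
# Load the Hive table into a Spark DataFrame (my_db.my_table is a placeholder)
df <- sql("SELECT * FROM my_db.my_table")

# collect() pulls the whole distributed DataFrame into a local R data.frame,
# so the entire result has to fit in the interpreter JVM's heap
rdf <- collect(df)
head(rdf)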
But as soon as the data volume grows, the same paragraph fails with:

java.lang.OutOfMemoryError: GC overhead limit exceeded

I have tried increasing ZEPPELIN_MEM and ZEPPELIN_INTP_MEM in
zeppelin-env.cmd, but I am still hitting this issue. I used the following
configuration:

set ZEPPELIN_MEM="-Xms4096m -Xmx4096m -XX:MaxPermSize=2048m"
set ZEPPELIN_INTP_MEM="-Xmx4096m -Xms4096m -XX:MaxPermSize=2048m"
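
Since collect() pulls the entire DataFrame through the Spark driver, I also
wonder whether I need to raise the driver memory separately. I was thinking
of something like the following line in zeppelin-env.cmd, but I am not sure
this is the right knob:

set SPARK_SUBMIT_OPTIONS=--driver-memory 4g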

I am sure this much memory should be sufficient for my data, but I am still
getting the same error. Any guidance would be much appreciated.

Thanks,
Rushikesh Raut
