This is a bug in Zeppelin: spark.driver.memory won't take effect. As of now it 
isn't passed to Spark through the --conf parameter. See 
https://issues.apache.org/jira/browse/ZEPPELIN-1263
The workaround is to specify SPARK_DRIVER_MEMORY on the interpreter setting page.
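For example (the 4g value below is only an illustration, pick a heap size that fits your workload), you can add SPARK_DRIVER_MEMORY as a property of the Spark interpreter on the interpreter setting page, or export it in conf/zeppelin-env.sh before starting Zeppelin:

    # conf/zeppelin-env.sh -- example only
    export SPARK_DRIVER_MEMORY=4g

Restart the Spark interpreter after changing the setting so the driver JVM is relaunched with the new memory limit.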



Best Regards,
Jeff Zhang


From: RUSHIKESH RAUT <[email protected]>
Reply-To: "[email protected]" <[email protected]>
Date: Sunday, March 26, 2017 at 5:03 PM
To: "[email protected]" <[email protected]>
Subject: Re: Zeppelin out of memory issue - (GC overhead limit exceeded)

ZEPPELIN_INTP_JAVA_OPTS