Re: PySpark Interpreter not working

2015-07-07 Thread moon soo Lee
Hi, https://github.com/apache/incubator-zeppelin/pull/118 was recently merged, and I think it helps with configuring PySpark on YARN. Please try the latest branch and let me know if it helps. Thanks, moon On Tue, Jun 30, 2015 at 12:10 AM IT CTO wrote: > I have the same configuration but when I run the

Re: PySpark Interpreter not working

2015-06-30 Thread IT CTO
I have the same configuration, but when I run the program the process never returns. The status stays RUNNING and the log keeps saying SEND >> PROGRESS forever. On Sun, Jun 14, 2015 at 2:34 AM MrAsanjar . wrote: > hi > I had a similar issue, try these: > 1) add following settings to zeppelin-env.sh ( it must b

Re: PySpark Interpreter not working

2015-06-13 Thread MrAsanjar .
Hi, I had a similar issue; try these:
1) Add the following settings to zeppelin-env.sh (at this time they must be added there):
   export MASTER=yarn-client
   export HADOOP_CONF_DIR=/etc/hadoop/conf
   export PYTHONPATH=/usr/lib/spark/python:/usr/lib/spark/python/lib/py4j-0.8.2.1-src.zip
   export SPARK_YARN_USER_EN

Re: PySpark Interpreter not working

2015-06-13 Thread moon soo Lee
Hi, your configuration looks okay if you're not using yarn-client mode. Could you make sure your interpreter setting has the spark.home property set to /home/biadmin/spark-1.3.0/spark-1.3.0-bin-hadoop2.4? Thanks, moon On Thu, Jun 11, 2015 at 12:14 PM Marcel Hofmann wrote: > Hey everybody,
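
[Editor's note: as an illustration, not from the original message, once the pyspark interpreter does respond, a quick way to check which Spark home it actually picked up is a paragraph along these lines; sc is the SparkContext that Zeppelin injects into the pyspark interpreter, and the "not set" fallback is just an illustrative default:]

    %pyspark
    import os
    # Spark home as seen in the interpreter process environment, if exported there
    print(os.environ.get("SPARK_HOME"))
    # spark.home as seen by the running SparkContext; falls back to "not set" if absent
    print(sc.getConf().get("spark.home", "not set"))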

PySpark Interpreter not working

2015-06-11 Thread Marcel Hofmann
Hey everybody, I'm currently testing Zeppelin, but unfortunately I can't really get it up and running. The example notebook runs just fine and everything works there, but a simple PySpark paragraph like:

    %pyspark
    list = range(1,4)
    print(list)

will not execute. Looking at the interpreter-
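
[Editor's note: an illustrative sketch, not part of the original report. The paragraph quoted above shadows the built-in name list, and on Python 3 range() prints as range(1, 4) rather than the values, so a slightly safer smoke test once the interpreter responds might be:]

    %pyspark
    nums = range(1, 4)      # avoid reusing the built-in name `list`
    print(list(nums))       # [1, 2, 3] on both Python 2 and Python 3
    print(sc.version)       # also confirms the injected SparkContext is live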