Hi,
   This might be off topic, but Databricks has a web application in which
you can use Spark with Jupyter. Have a look at
https://community.cloud.databricks.com

kr

On Thu, Jan 5, 2017 at 7:53 PM, Jon G <jonrgr...@gmail.com> wrote:

> I don't use MapR, but I use pyspark with Jupyter, and this MapR blog post
> looks similar to what I do to set up:
>
> https://community.mapr.com/docs/DOC-1874-how-to-use-jupyter-pyspark-on-mapr
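>
> If you'd rather not edit sys.path by hand as in the snippet quoted below,
> the findspark package automates that step (a minimal sketch, assuming
> findspark is pip-installed and Spark lives at /usr/local/spark):
>
> import findspark
> # prepends Spark's python/ directory and its py4j zip to sys.path
> findspark.init('/usr/local/spark')
>
> import pyspark  # now importable inside the notebook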
>
>
> On Thu, Jan 5, 2017 at 3:05 AM, neil90 <neilp1...@icloud.com> wrote:
>
>> Assuming you don't have your environment variables set up in your
>> .bash_profile, you would do it like this:
>>
>> import os
>> import sys
>>
>> spark_home = '/usr/local/spark'
>> sys.path.insert(0, os.path.join(spark_home, 'python'))
>> sys.path.insert(0, os.path.join(spark_home, 'python/lib/py4j-0.10.1-src.zip'))
>>
>> # Optionally set PYSPARK_SUBMIT_ARGS to pass the same options you would
>> # pass when launching pyspark directly from the command line, e.g.:
>> # os.environ['PYSPARK_SUBMIT_ARGS'] = '--master spark://54.68.147.137:7077 pyspark-shell'
>>
>> from pyspark import SparkContext, SparkConf
>> from pyspark.sql import SparkSession
>>
>> # Run locally with 8 threads
>> conf = SparkConf()\
>>     .setMaster("local[8]")\
>>     .setAppName("Test")
>>
>> sc = SparkContext(conf=conf)
>>
>> # Build a Hive-enabled SparkSession on top of the existing context's conf
>> spark = SparkSession.builder\
>>     .config(conf=sc.getConf())\
>>     .enableHiveSupport()\
>>     .getOrCreate()
>>
>> Mind you, this is for Spark 2.0 and above.
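>>
>> Since this is Spark 2.0+ anyway, you can also skip the explicit
>> SparkContext and build the session directly; a minimal equivalent
>> sketch with the same master and app name as above:
>>
>> from pyspark.sql import SparkSession
>>
>> spark = SparkSession.builder\
>>     .master("local[8]")\
>>     .appName("Test")\
>>     .enableHiveSupport()\
>>     .getOrCreate()
>>
>> sc = spark.sparkContext  # the underlying SparkContext, if you need it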
>>
>>
>>
>>
>>
>
