Hi Deenar,
It is possible to use the Zeppelin context via the PySpark interpreter.
Example (based on Zeppelin 0.6.0)
paragraph1
---
%spark
// do some stuff and store the result (a DataFrame) into the Zeppelin
// context, in this case as a SQL DataFrame
...
z.put("scala_df", scala_df: org.apache.spark.sql.DataFrame)
Hi
Is it possible to access the Zeppelin context via the PySpark interpreter? Not
all the methods available via the Spark Scala interpreter seem to be
available in the PySpark one (unless I am doing something wrong). I would
like to do something like this from the PySpark interpreter:
z.show(df, 100)