Hello,

I have created a custom interpreter that collects data from a service
with a custom query language, and I would like to be able to use this
data with existing interpreters in Zeppelin, such as the Spark
interpreters. The scenario I'm imagining is: the custom interpreter
runs, formats the collected data into a DataFrame/RDD, and injects it
into the context; subsequent paragraphs then use interpreters from the
Spark group to process this data further. This is similar to what
happens in the "Zeppelin Tutorial/Basic Features (Spark)" notebook,
where Scala code creates some data, calls "registerTempTable" to put
the data into the Spark SQL context, and then that data can be queried
from SQL paragraphs further down.
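
For reference, the pattern from the tutorial that I have in mind looks
roughly like the following (from memory, with illustrative names, so
the details may not match the notebook exactly):

    %spark
    import sqlContext.implicits._
    case class Event(name: String, value: Int)
    // build a small DataFrame from in-memory data
    val events = sc.parallelize(Seq(Event("a", 1), Event("b", 2))).toDF()
    // register it so later paragraphs can query it by name
    events.registerTempTable("events")

    %sql
    -- a later paragraph in the same note can now query the table
    select name, value from events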

How can I accomplish this? Is there a simple solution, such as calling
something like "registerTempTable" from the custom interpreter and then
running the other interpreters below normally, as the tutorial does?
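
Concretely, what I picture inside my interpreter's code is something
like the sketch below. This is guesswork on my part: both
lookupSparkSqlContext() and fetchFromService() are hypothetical
placeholders, the former for whatever mechanism (if one exists) would
give my interpreter a handle on the Spark interpreter group's
SQLContext, and the latter for my existing data-collection code:

    // inside my custom interpreter, after running the service query
    case class ServiceRecord(name: String, value: Int)
    val records: Seq[ServiceRecord] = fetchFromService(query)
    // hypothetical: obtain the SQLContext shared with the Spark interpreters
    val sqlContext = lookupSparkSqlContext()
    val df = sqlContext.createDataFrame(records)
    // register the data so a later %sql paragraph could run:
    //   select * from service_results
    df.registerTempTable("service_results")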

Thank you for any guidance.
