I am thinking about proposing a common in-memory shared storage so that all
interpreters can pass variables around. I'll create a JIRA soon to submit the
idea and the architecture.
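
Roughly, what I have in mind is a small, name-keyed resource pool that every
interpreter process can read from and write to. A very rough sketch of what
the API might look like (all names here are hypothetical; nothing like this
exists in Zeppelin yet):

//----------------------------------
// Hypothetical sketch only: a process-shared pool keyed by name.
// Values would need to be serializable to cross process boundaries.
trait SharedResourcePool {
  def put(name: String, value: java.io.Serializable): Unit   // publish a value under a name
  def get(name: String): Option[java.io.Serializable]        // read it from any interpreter
  def remove(name: String): Unit                              // drop it when no longer needed
}

// e.g. a Spark paragraph could publish a table name (or a small dataset),
// and a custom interpreter in another process could look it up by the key:
//   pool.put("bank", "bank")          // share the temp table name
//   val table = pool.get("bank")      // fetch it from the other interpreter
//----------------------------------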

On Thu, Oct 29, 2015 at 5:40 PM, moon soo Lee <m...@apache.org> wrote:

> Hi,
>
> If your custom interpreter is in the same interpreter group 'spark', you
> can exchange data between SparkInterpreter and your custom interpreter,
> because interpreters in the same group run in the same process.
>
> But if your custom interpreter is in a different interpreter group, the
> only way at the moment is to persist the data from SparkInterpreter and
> read it back in your custom interpreter.
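
(For reference, a minimal sketch of the persist-and-read approach moon
describes, assuming Spark 1.4+, a path that both interpreter processes can
reach, and that the custom interpreter creates its own SQLContext:)

//----------------------------------
// In the Spark paragraph: persist the DataFrame somewhere shared.
// "/tmp/zeppelin/bank.parquet" is only an example path.
bank.write.parquet("/tmp/zeppelin/bank.parquet")

// In the custom interpreter's process: read it back and re-register it.
val bank = sqlContext.read.parquet("/tmp/zeppelin/bank.parquet")
bank.registerTempTable("bank")
//----------------------------------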
>
> Thanks,
> moon
>
>
> On Thu, Oct 29, 2015 at 11:07 AM Miyuru Dayarathna <miyu...@yahoo.co.uk>
> wrote:
>
>> Hi,
>>
>> I am trying to access the Spark data frame defined in the Zeppelin
>> Tutorial notebook from a separate paragraph using a custom-written
>> Zeppelin interpreter. To make this clearer, the code snippet below is
>> from the "Load data into table" paragraph of the Zeppelin Tutorial
>> notebook. When it is run, the data frame called "bank" gets initialized
>> in a Spark interpreter process. I want to use the bank data frame from
>> my custom Zeppelin interpreter. Can you please let me know how to do
>> this? Is there a Zeppelin API that provides access to such variables
>> from a different interpreter than the one where they were instantiated?
>>
>> //----------------------------------
>>
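>> // (bankText is an RDD[String] and Bank a case class, both presumably
>> // defined earlier in the same tutorial paragraph; only this excerpt is
>> // shown.)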
>> val bank = bankText.map(s => s.split(";")).filter(s => s(0) != "\"age\"").map(
>>     s => Bank(s(0).toInt,
>>             s(1).replaceAll("\"", ""),
>>             s(2).replaceAll("\"", ""),
>>             s(3).replaceAll("\"", ""),
>>             s(5).replaceAll("\"", "").toInt
>>         )
>> ).toDF()
>>
>> bank.registerTempTable("bank")
>>
>> //----------------------------------
>>
>> Thanks,
>> Miyuru
>>
>
