[ https://issues.apache.org/jira/browse/LIVY-325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16928471#comment-16928471 ]

Naman Mishra commented on LIVY-325:
-----------------------------------

Is there a plan to support this? I can start a PR to add "named interpreter 
groups", i.e., multiple interpreter groups in a session sharing a single Spark 
context, like Zeppelin's scoped mode 
(https://zeppelin.apache.org/docs/0.8.0/usage/interpreter/interpreter_binding_mode.html#scoped-mode). 
Each statement could then specify the interpreter group on which it should be 
executed.
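
To make the proposal concrete, here is a rough sketch of what such a statement 
request could look like. It follows the existing /sessions and /statements 
endpoints, but the "group" field (and the "etl" group name) is hypothetical and 
not part of Livy's current REST API:

{code:python}
import requests

LIVY = "http://localhost:8998"  # assumed Livy server address

# Create a multi-language ("shared") session. In practice one would poll the
# session until its state becomes "idle" before submitting statements.
session = requests.post(f"{LIVY}/sessions", json={"kind": "shared"}).json()
sid = session["id"]

# Hypothetical: run a statement on a named interpreter group, so that two
# Scala interpreters (say groups "etl" and "adhoc") could coexist in one
# session while sharing the same SparkContext.
stmt = requests.post(
    f"{LIVY}/sessions/{sid}/statements",
    json={
        "code": "spark.range(10).count()",
        "kind": "spark",   # language of this statement
        "group": "etl",    # hypothetical named interpreter group
    },
).json()
print(stmt["id"], stmt["state"])
{code}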

> Refactoring Livy Session and Interpreter
> ----------------------------------------
>
>                 Key: LIVY-325
>                 URL: https://issues.apache.org/jira/browse/LIVY-325
>             Project: Livy
>          Issue Type: New Feature
>          Components: REPL
>    Affects Versions: 0.4.0
>            Reporter: Saisai Shao
>            Priority: Major
>
> Currently in Livy master code, the interpreter is bound to the Livy session: 
> when we create a session we must specify which interpreter we want, and that 
> interpreter is created implicitly. This has several limitations:
> 1. We cannot create a shared session; once we choose one language, we have to 
> create a new session to use another. But notebooks like Zeppelin can use 
> Python, Scala, and R to manipulate data under the same SparkContext. So in 
> Livy we should decouple the interpreter from the SparkContext and support a 
> shared context between different interpreters.
> 2. Furthermore, we cannot create multiple interpreters of the same kind in 
> one session. For example, Zeppelin's scoped mode can create multiple Scala 
> interpreters that share one context, but current Livy cannot support this.
> Based on the problems mentioned above, we mainly have three things to do:
> 1. Decouple interpreters from the Spark context, so that we can create 
> multiple interpreters under one context.
> 2. Make sure multiple interpreters can work together.
> 3. Change the REST APIs to support multiple interpreters per session.
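
As a rough illustration of the decoupling described in the quoted issue, the 
toy model below (not Livy code; all names are made up) keeps one shared 
context per session and creates interpreters lazily per (group, kind) key, so 
several interpreters of the same kind can share that context:

{code:python}
class ToyInterpreter:
    def __init__(self, kind, context):
        self.kind = kind
        self.context = context  # the shared "SparkContext"

    def execute(self, code):
        # A real interpreter would evaluate `code`; here we just echo it.
        return f"[{self.kind}] ran {code!r} on {self.context}"


class ToySession:
    def __init__(self, context):
        self.context = context      # created once, shared by all interpreters
        self.interpreters = {}      # (group, kind) -> ToyInterpreter

    def execute(self, group, kind, code):
        key = (group, kind)
        if key not in self.interpreters:  # create interpreters lazily per group
            self.interpreters[key] = ToyInterpreter(kind, self.context)
        return self.interpreters[key].execute(code)


session = ToySession(context="sc-1")
print(session.execute("groupA", "spark", "df.count()"))    # first Scala interpreter
print(session.execute("groupB", "spark", "df.show()"))     # second Scala interpreter, same context
print(session.execute("groupA", "pyspark", "df.head()"))   # different language, same context
{code}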


