Thanks Piyush. Do we have any ETA for this to be sent for review?
Dimple
On Wed, Jan 13, 2016 at 6:23 PM, Piyush Mukati (Data Platform) <piyush.muk...@flipkart.com> wrote:
Hi,
The code is available here
https://github.com/piyush-mukati/incubator-zeppelin/tree/parallel_scheduler_support_spark
Some testing work is still left.
On Wed, Jan 13, 2016 at 11:47 PM, Dimp Bhat wrote:
Hi Pranav,
When do you plan to send out the code for running notebooks in parallel ?
Dimple
On Tue, Nov 17, 2015 at 3:27 AM, Pranav Kumar Agarwal wrote:
Hi Rohit,
We implemented the proposal and are able to run Zeppelin as a hosted
service inside our organization. Our internal forked version has
pluggable authentication and type-ahead.
I need to port the work to the latest code and chop out the auth
changes portion. We'll be submitting it so
Hey Pranav,
Did you make any progress on this?
--
Rohit
On Sunday, August 16, 2015, moon soo Lee wrote:
It had nothing to do with the changes related to the completion code. The
issue was reproducible on master as well.
It's due to the recent fix for ZEPPELIN-173.
On one of our environments the hostname didn't return the domain name
after the host name; however, since the query coming from the browser
included
Hi,
I'm not sure what could be wrong.
Can you see any existing notebook?
Best,
moon
On Mon, Aug 31, 2015 at 8:48 PM Piyush Mukati (Data Platform) <piyush.muk...@flipkart.com> wrote:
Hi,
We have passed the InterpreterContext to completion(); it is working well
on my local dev setup.
But after
mvn clean package -P build-distr -Pspark-1.4 -Dhadoop.version=2.6.0
-Phadoop-2.6 -Pyarn
I copied zeppelin-0.6.0-incubating-SNAPSHOT.tar.gz to some other machine;
while running from th
Hi Pranav,
Thanks for sharing the plan.
I think passing the InterpreterContext to completion() makes sense.
Although it changes the interpreter API, changing it now looks better than later.
Thanks.
moon
On Tue, Aug 25, 2015 at 11:22 PM Pranav Kumar Agarwal wrote:
Hi Moon,
> I think releasing SparkIMain and related objects
By packaging I meant to ask: what is the process to "release SparkIMain
and related objects" for Zeppelin's code uptake?
I have one more question:
Most of the changes to allow SparkInterpreter to support ParallelScheduler are
implemented bu
Could you explain a little bit more about the package changes you mean?
Thanks,
moon
On Mon, Aug 17, 2015 at 10:27 AM Pranav Agarwal wrote:
Any thoughts on how to package changes related to Spark?
On 17-Aug-2015 7:58 pm, "moon soo Lee" wrote:
I think releasing SparkIMain and related objects after a configurable
inactivity period would be good for now.
About the scheduler, I can help implement such a scheduler.
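As a rough sketch of the configurable-inactivity idea, the interpreter could track a last-used timestamp per notebook and periodically drop idle entries. The `IdleEvictor`/`NoteState` names and layout below are illustrative assumptions, not Zeppelin code:

```java
import java.util.Iterator;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch: evict per-notebook interpreter state after a configurable idle period.
// NoteState is a hypothetical stand-in for SparkIMain and related objects.
public class IdleEvictor {
    static class NoteState {
        volatile long lastUsedMillis;
        NoteState(long now) { this.lastUsedMillis = now; }
    }

    private final Map<String, NoteState> states = new ConcurrentHashMap<>();
    private final long maxIdleMillis;

    public IdleEvictor(long maxIdleMillis) { this.maxIdleMillis = maxIdleMillis; }

    // Called on every paragraph run: create state lazily, refresh the timestamp.
    public NoteState touch(String noteId, long nowMillis) {
        NoteState s = states.computeIfAbsent(noteId, id -> new NoteState(nowMillis));
        s.lastUsedMillis = nowMillis;
        return s;
    }

    // Called periodically (e.g. from a ScheduledExecutorService): drop idle entries.
    public int evictIdle(long nowMillis) {
        int evicted = 0;
        Iterator<Map.Entry<String, NoteState>> it = states.entrySet().iterator();
        while (it.hasNext()) {
            if (nowMillis - it.next().getValue().lastUsedMillis > maxIdleMillis) {
                it.remove();   // a real version would also close the SparkIMain here
                evicted++;
            }
        }
        return evicted;
    }

    public int size() { return states.size(); }
}
```

Passing the clock in explicitly keeps the eviction logic testable without real timers.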
Thanks,
moon
On Sun, Aug 16, 2015 at 11:54 PM Pranav Kumar Agarwal
wrote:
Hi Moon,
Yes, the notebook id comes from the InterpreterContext. At the moment,
destroying the SparkIMain on deletion of a notebook is not handled. I think
SparkIMain is a lightweight object; do you see a concern with keeping these
objects in a map? One possible option could be to destroy notebook
related obje
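The map-per-notebook idea under discussion could look roughly like this; `PerNoteRepl` and `ReplState` are hypothetical stand-ins for SparkIMain plus its streams, not Zeppelin's actual API:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch: one interpreter-state object per notebook, dropped when the
// notebook is deleted.
public class PerNoteRepl {
    static class ReplState {
        final String noteId;
        boolean closed = false;
        ReplState(String noteId) { this.noteId = noteId; }
        void close() { closed = true; }  // a real version would tear down SparkIMain
    }

    private final Map<String, ReplState> repls = new ConcurrentHashMap<>();

    // Lazily create the per-note state on first use (e.g. first paragraph run).
    public ReplState forNote(String noteId) {
        return repls.computeIfAbsent(noteId, ReplState::new);
    }

    // Hook for the notebook-deletion event, answering the cleanup question above.
    public void onNoteDeleted(String noteId) {
        ReplState s = repls.remove(noteId);
        if (s != null) s.close();
    }

    public int openCount() { return repls.size(); }
}
```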
Pranav, proposal looks awesome!
I have a question and some feedback.
You said you tested 1, 2, and 3. To create a SparkIMain per notebook, you need
the notebook id. Did you get it from the InterpreterContext?
Then how did you handle destroying the SparkIMain (when a notebook is
deleted)?
As far as i
+1 for "to re-factor the Zeppelin architecture so that it can handle
multi-tenancy easily"
On Sun, Aug 16, 2015 at 9:47 AM DuyHai Doan wrote:
Agree with Joel, we may think to re-factor the Zeppelin architecture so
that it can handle multi-tenancy easily. The technical solution
proposed by Pranav
is great, but it only applies to Spark. Right now, each interpreter has to
manage multi-tenancy in its own way. Ultimately Zeppelin can propose a
mu
If the problem is that multiple users have to wait for each other while
using Zeppelin, the solution already exists: they can create a new
interpreter on the interpreter page and attach it to their notebook;
then they don't have to wait for others to submit their jobs.
But I agree, having
If someone can share ideas about sharing a single SparkContext across
multiple SparkILoops safely, it'll be really helpful.
Here is a proposal:
1. In Spark code, change SparkIMain.scala to allow setting the virtual
directory. While creating new instances of SparkIMain per notebook from
zepp
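The per-notebook virtual directory in step 1 could be derived as one class-output directory per notebook id, so generated classes from different notebooks never collide. The layout below is an illustrative assumption, not what SparkIMain actually uses:

```java
import java.io.File;
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;

// Sketch: each notebook's REPL compiles its generated classes into its own
// subdirectory, so two notebooks emitting the same class name cannot clobber
// each other.
public class PerNoteClassDir {
    private final File root;

    public PerNoteClassDir() {
        try {
            this.root = Files.createTempDirectory("repl-classes-").toFile();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    // One output directory per notebook id; a per-note SparkIMain would be
    // pointed at this directory as its virtual/output dir.
    public File dirFor(String noteId) {
        File d = new File(root, noteId);
        d.mkdirs();
        return d;
    }
}
```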
Hi Piyush,
A separate instance of SparkILoop and SparkIMain for each notebook while
sharing the SparkContext sounds great.
Actually, I tried to do it and found a problem: multiple SparkILoops could
generate the same class name, and the Spark executor confuses class names since
they're reading classes from sin
Hi Moon,
Any suggestions on it? We have to wait a lot when multiple people are working
with Spark.
Can we create separate instances of SparkILoop, SparkIMain, and
print streams for each notebook while sharing the SparkContext,
ZeppelinContext, SQLContext, and DependencyResolver, and then use the
parallel scheduler?
Hi Moon,
How about tracking a dedicated SparkContext per notebook in Spark's
remote interpreter? This would allow multiple users to run their Spark
paragraphs in parallel. Also, within a notebook only one paragraph is
executed at a time.
Regards,
-Pranav.
On 15/07/15 7:15 pm, moon soo Lee
A quite significant side effect of the method I mentioned before was
that it will create a lot of RemoteInterpreterServer instances for different
notes with different interpreter settings. And I noticed that
the RemoteInterpreterServer never stops after it starts, except when
the interpreter
setting was ch
Got it! Thanks for answering my question; I was quite clear about it after
reading the code. And I think this is OK for one person who creates a single
note to write and run his code.
I'm planning to create a Zeppelin server and share it with all the RDs in our
company. To make sure they can parallel r
We may introduce some paragraph dependency system to remove this limitation.
Indeed, in the InterpreterContext object we can introduce a dependency link
between different paragraphs so that we guarantee a correct ordering of
execution.
It would imply updating the AngularJS code to add a config secti
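Such a dependency link could drive a topological execution order, roughly as below. This is purely illustrative; `ParagraphDag` and its methods are assumptions, not an existing Zeppelin API:

```java
import java.util.*;

// Sketch: each paragraph lists the paragraphs it depends on, and we run them
// in a topological order so a paragraph never runs before its dependencies.
public class ParagraphDag {
    private final Map<String, List<String>> deps = new HashMap<>();

    public void addParagraph(String id, String... dependsOn) {
        deps.put(id, Arrays.asList(dependsOn));
    }

    // Depth-first topological sort; throws on a dependency cycle.
    public List<String> executionOrder() {
        List<String> order = new ArrayList<>();
        Set<String> done = new HashSet<>(), visiting = new HashSet<>();
        for (String id : deps.keySet()) visit(id, done, visiting, order);
        return order;
    }

    private void visit(String id, Set<String> done, Set<String> visiting,
                       List<String> order) {
        if (done.contains(id)) return;
        if (!visiting.add(id)) throw new IllegalStateException("cycle at " + id);
        for (String d : deps.getOrDefault(id, Collections.emptyList()))
            visit(d, done, visiting, order);
        visiting.remove(id);
        done.add(id);
        order.add(id);
    }
}
```

With explicit dependencies, independent paragraphs could then run in parallel while dependent ones still execute in order.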
Hi,
Thanks for asking the question.
The reason is simply that it is running code statements. The statements
can have order and dependencies. Imagine I have two paragraphs:
%spark
val a = 1
%spark
print(a)
If they're not run one by one, that means they possibly run in random
order and the o
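The per-note FIFO behavior described above can be sketched as one single-threaded executor per note: paragraphs in a note run in submit order (so `val a = 1` always completes before `print(a)`), while separate notes stay independent. A sketch only, not Zeppelin's scheduler code:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

// Sketch: FIFO within a note via a single-threaded executor per note id.
public class PerNoteFifo {
    private final Map<String, ExecutorService> perNote = new ConcurrentHashMap<>();

    // Jobs submitted for the same note run strictly in submit order.
    public Future<?> submit(String noteId, Runnable paragraph) {
        return perNote
            .computeIfAbsent(noteId, id -> Executors.newSingleThreadExecutor())
            .submit(paragraph);
    }

    // Shut everything down and wait for queued paragraphs to finish.
    public void awaitAll(long timeoutMillis) {
        perNote.values().forEach(ExecutorService::shutdown);
        for (ExecutorService e : perNote.values()) {
            try { e.awaitTermination(timeoutMillis, TimeUnit.MILLISECONDS); }
            catch (InterruptedException ie) { Thread.currentThread().interrupt(); }
        }
    }
}
```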
Anyone who has the same question as me? Or is this not a question?
2015-07-14 11:47 GMT+08:00 linxi zeng :
Hi, Moon:
I notice that the getScheduler function in SparkInterpreter.java
returns a FIFOScheduler, which makes the Spark interpreter run Spark jobs one
by one. It's not a good experience when a couple of users do some work on
Zeppelin at the same time, because they have to wait for each other.
A
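The difference between the FIFO scheduler complained about here and a parallel one amounts to a worker pool of size 1 versus size N. The generic sketch below (not Zeppelin's FIFOScheduler/ParallelScheduler implementation) makes that observable by measuring peak concurrency:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch: a FIFO scheduler behaves like a pool of size 1 (jobs wait for each
// other); a parallel scheduler is a pool of size N.
public class SchedulerSketch {
    // Run `jobs` tasks of `jobMillis` each on a pool of `poolSize` threads and
    // report the highest number of tasks observed running at once.
    public static int maxObservedConcurrency(int poolSize, int jobs, long jobMillis) {
        ExecutorService pool = Executors.newFixedThreadPool(poolSize);
        AtomicInteger running = new AtomicInteger();
        AtomicInteger peak = new AtomicInteger();
        CountDownLatch done = new CountDownLatch(jobs);
        for (int i = 0; i < jobs; i++) {
            pool.submit(() -> {
                int now = running.incrementAndGet();
                peak.accumulateAndGet(now, Math::max);
                try { Thread.sleep(jobMillis); } catch (InterruptedException ignored) {}
                running.decrementAndGet();
                done.countDown();
            });
        }
        try { done.await(); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        pool.shutdown();
        return peak.get();
    }
}
```

With pool size 1 the peak stays at 1 (the FIFO experience); with a larger pool, jobs from different users can overlap.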