I think the solution would be to distinguish between interpreter type and
interpreter instance.
The type should be relatively static, while the instance could be any
alias/name, generating a warning only when it cannot be matched with an
entry in interpreter.json. Finally the specific type would be adde
Hi,
For one, I know that there is rudimentary scheduling built into Zeppelin
already (at least I fixed a bug in the test for a scheduling feature a few
months ago).
But another point is that Zeppelin should also focus on quality,
reproducibility and portability.
Although this doesn't offer excit
I would go the other way: the visualization and the back-end should not be
strictly part of the Zeppelin core; a generic interface to manage
diverse pluggable back-ends through pluggable front-ends, with a common core
for managing notebooks and data structures, would go a long way towards
turning a
From my experience, that will not work unless you build with -DskipTests,
since the tests require that you have a dist file available.
Even CI runs two builds at the moment.
So, if you need a tested distributable (NB: tests will fail anyway until
ZEPPELIN-346 gets merged), you first have to clean package with
-D
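A minimal sketch of the two-pass build described above (the exact flags and goals are assumed from memory; verify them against your Zeppelin version's build documentation):

```shell
# 1. Build the distributable first without running tests,
#    so the dist file the tests depend on exists.
mvn clean package -DskipTests

# 2. Then run the tests against the freshly built distribution.
mvn test
```

This mirrors the two builds that CI currently runs.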
ications in Spark Web UI.
>
> 1. will they have their own 'context' of execution in this case? If I
> understand, this would mean that closing a spark context in one user's
> zeppelin will have no impact on another user's zeppelin environment or its
> not true?
>
1)
Zeppelin uses the spark-shell REPL API, so it behaves similarly to
the Scala shell.
You do not, in the technical sense, write applications in the shell;
instead, you evaluate individual expressions with the goal of interacting
with a dataset.
You can (manually) export some of the code that
ead and raise that issue if you think it will help, although
> there is already the feature request at ZEPPELIN-169. Regardless of whether
> this is a feature request or a bug this renders Zeppelin pretty useless to
> me behind the corporate firewall…
>
>
>
> Thanks, Lucas.
, before opening an issue, maybe I'm just
"holding it wrong". I doubt an issue will push this along much faster
either, unless one of us actually submits a patch/PR to go along with it ;)
Best,
Rick
> *From:* Rick Moritz [mailto:rah...@gmail.com]
> *Sent:* 21 September 2015 15:1
Hello Lucas, hello list,
Hopefully this message will thread properly.
This problem can actually be reproduced by the corresponding unit tests -
at least on my "disconnected" system, the tests for the
SparkInterpreter fail in exactly the same way as your code does. This is
also an is