Is there anything helium-related in the Zeppelin logs once you start it?
If there is a problem, it should say so.
Btw, once you open a notebook that has associated Helium plugins, Zeppelin
tries to download and install Node and Yarn at runtime, which makes me
wonder whether it will work without internet access
Hi
Do you know how I can change the folder path where the interpreters are
executed?
The reason I want to change that default location (which is
$ZEPPELIN_HOME) is that we are getting very large core dump files in
that location when an interpreter process dies.
As we are in a k8s ecosystem
Hi, you can specify it in zeppelin-env.sh, or in the Dockerfile.
Zeppelin will look for that variable first in the interpreter settings and,
if it does not find it there, in the Zeppelin environment variables; so you
can specify it in both places, but as it does not change frequently
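For example, an entry in conf/zeppelin-env.sh might look like this (the exported variable name below is a placeholder for whichever setting your deployment actually uses; `ulimit` is a generic shell way to cap core dump size for processes started from that shell):

```sh
# conf/zeppelin-env.sh -- sketch; the variable name below is a placeholder.
export MY_INTERPRETER_WORKDIR=/data/zeppelin/interpreter-workdir

# Generic shell option: disable core dumps for processes started from here.
ulimit -c 0
```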
If you are using Shiro, you can also check that your config looks like this:
...
/** = anon
#/** = authc
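For reference, the anonymous setup above usually lives in the [urls] section of conf/shiro.ini; a minimal sketch (adjust to your own auth setup):

```ini
[urls]
# Allow anonymous access to everything; swap the comments to require login.
/** = anon
#/** = authc
```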
On Wed, Aug 15, 2018 at 21:44, Jhon Anderson Cardenas Diaz (<
jhonderson2...@gmail.com>) wrote:
> Hi,
>
> Check if you have the file conf/zeppelin-site.xml and then validate
Hi,
Check if you have the file conf/zeppelin-site.xml and then validate that
the value of the property zeppelin.anonymous.allowed is 'true' (the default).
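For reference, that property is set in conf/zeppelin-site.xml like this:

```xml
<property>
  <name>zeppelin.anonymous.allowed</name>
  <value>true</value>
  <description>Anonymous user allowed by default</description>
</property>
```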
Regards.
On Wed, Aug 15, 2018 at 16:32, Mohit Jaggi ()
wrote:
> I downloaded Z 0.7.2 and started it on my mac. It is asking me to log in
at 20:07, Jeff Zhang () wrote:
>
> This is the first time I have seen a user report this issue. What
> interpreter do you use? Is it easy to reproduce?
>
>
> Jhon Anderson Cardenas Diaz wrote on Fri, Aug 3, 2018 at 12:34 AM:
>
>> Hi!
>>
>> Has someone else experien
Hi!
Has someone else experienced this problem?
Sometimes *when a paragraph is executed it shows random output from another
notebook* (from other users as well).
We are using Zeppelin 0.7.3, and Spark and all other interpreters are
configured in "Per User - Scoped" mode.
Regards.
Hi!
Right now the Zeppelin start-up time depends directly on the time it takes
to load the notebooks from the repository. If the user has a lot of
notebooks (e.g. more than 1000), start-up begins to take too long.
Is there any plan to re-implement this notebook loading so that it is
done
Dear community,
We are currently having problems with multiple users running paragraphs
associated with PySpark jobs.
The problem is that if a user aborts/cancels his PySpark paragraph (job),
the active PySpark jobs of the other users are cancelled too.
Going into detail, I've seen that when you
Yes, I did the sudoers configuration and I am using the zeppelin user (not
root) to execute that command. The problem is that the command is executed
using sudo (*sudo* -E -H -u bash -c "...") so it will be executed as the
root user anyway, as I showed you in the ps aux results.
Regards.
2018-05-10 14:48 GMT-05
would think that is another security issue of this approach. What do you
think about it?
2018-05-09 12:53 GMT-05:00 Jhon Anderson Cardenas Diaz <
jhonderson2...@gmail.com>:
>
> -- Forwarded message -
> From: Sam Nicholson
> Date: Wed, May 9, 2018 12:04
>
ystem,
>> but zeppelin web can access the zeppelin executable. So, don't put this
>> up for untrusted users!!!
>>
>> Here is my zeppelin start script:
>> #!/bin/sh
>>
>> cd /var/www/zeppelin/home
>>
>> sudo -u zeppelin /opt/apache/zeppelin/zep
Dear Zeppelin Community,
Currently, when a Zeppelin paragraph is executed, the code in it can read
sensitive config files and change them, including web app pages etc. Like
in this example:
%python
f = open("/usr/zeppelin/conf/credentials.json", "r")
f.read()
Do you know if there is a way to con
Hi!
I am trying to implement a filter inside Zeppelin in order to intercept
requests and collect metrics about Zeppelin performance. I registered the
javax servlet filter in zeppelin-web/src/WEB-INF/web.xml, and the filter
works well for REST requests, but it does not intercept the
Web
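For reference, a standard javax servlet filter registration in web.xml has this shape (the filter name and class below are placeholders):

```xml
<filter>
  <filter-name>metricsFilter</filter-name>
  <filter-class>com.example.MetricsFilter</filter-class>
</filter>
<filter-mapping>
  <filter-name>metricsFilter</filter-name>
  <url-pattern>/*</url-pattern>
</filter-mapping>
```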
Hi,
The permission settings are stored in:
$ZEPPELIN_HOME/conf/notebook-authorization.json
The interpreter settings are stored in:
$ZEPPELIN_HOME/conf/interpreter.json
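For illustration, notebook-authorization.json maps note ids to owner/reader/writer lists, roughly like this (the note id and user names here are made up):

```json
{
  "authInfo": {
    "2ABCDEFGH": {
      "owners": ["alice"],
      "readers": ["alice", "bob"],
      "writers": ["alice"]
    }
  }
}
```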
I think that since Zeppelin 0.8.0 there is a mechanism to persist the
interpreter configuration. If you work with an earlier version,
> September so not sure if you have that.
>
> Check out
> https://medium.com/@zjffdu/zeppelin-0-8-0-new-features-ea53e8810235 how
> to set this up.
>
>
>
> --
> Ruslan Dautkhanov
>
> On Tue, Mar 13, 2018 at 5:24 PM, Jhon Anderson Cardenas Diaz <
> jhonderson2
Hi Zeppelin users!
I am working with Zeppelin pointing to Spark in standalone mode. I am
trying to figure out a way to make Zeppelin run the Spark driver outside of
the client process that submits the application.
According to the documentation (
http://spark.apache.org/docs/2.1.1/spark-standalone.h
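For context, standalone Spark itself can run the driver on a worker instead of in the client via cluster deploy mode; a sketch (the master URL, class, and jar names are placeholders):

```sh
# With --deploy-mode cluster, the driver runs on a worker,
# not in the client process that submits the application.
spark-submit \
  --master spark://master-host:7077 \
  --deploy-mode cluster \
  --class com.example.MyApp \
  my-app.jar
```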
Hi fellow Zeppelin users.
I would like to know if there is a way in Zeppelin to set interpreter
properties that cannot be changed by the user from the graphical interface.
An example use case where this can be useful is if we want Zeppelin users
not to be able to kill jobs from the Spark UI; for thi
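One related knob on the Spark side (not a Zeppelin-level property lock, but it addresses the kill-jobs example): spark.ui.killEnabled disables the kill links in the Spark UI. A spark-defaults.conf sketch:

```properties
# Disable the "kill" links for jobs/stages in the Spark UI.
spark.ui.killEnabled false
```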
When you say you change the dependency, is it only about the content, or
the content and the version? I think the dependency should be reloaded only
if its version changes.
I do not think it's optimal to re-download the dependencies every time the
interpreter restarts.
On Feb 22, 2018 at 05:22, "Partridge, Lucas
iple Spark UIs and on top
>>> of that maintaining the security and privacy in a shared multi-tenant env
>>> will need all the flexibility we can get!
>>>
>>> Thanks
>>> Ankit
>>>
>>> On Feb 1, 2018, at 7:51 PM, Jeff Zhang wrote:
>&
://hadoop.apache.org/docs/r2.7.3/hadoop-project-dist/hadoop-common/HttpAuthentication.html
Regards
2018-01-10 19:38 GMT-05:00 Jeff Zhang :
>
> It seems to be by design in yarn mode. Have you ever made it work in
> spark-shell?
>
>
> Jhon Anderson Cardenas Diaz wrote on Wed, Jan 10, 2018 at 9:17 PM:
>
>
Hello!
I'm a software developer, and as part of a project I need to extend the
functionality of SparkInterpreter without modifying it. Instead, I need to
create a new interpreter that extends it or wraps its functionality.
I also need the Spark sub-interpreters to use my new custom interpreter,
but
Hi fellow Zeppelin users,
I would like to create another implementation of the
org.apache.zeppelin.notebook.repo.NotebookRepo interface in order to
persist the notebooks from Zeppelin in S3, but in a versioned way (like a
Git on S3).
How do you recommend I add my jar file with the custom
impl
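Once the jar is on Zeppelin's classpath, the custom repo is typically selected via zeppelin.notebook.storage in conf/zeppelin-site.xml; a sketch (the class name below is a placeholder for your implementation):

```xml
<property>
  <name>zeppelin.notebook.storage</name>
  <value>com.example.VersionedS3NotebookRepo</value>
</property>
```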
*Environment*:
AWS EMR, YARN cluster.
*Description*:
I am trying to use a Java filter to protect access to the Spark UI, by
using the property spark.ui.filters; the problem is that when Spark is
running in YARN mode, that property is always overridden with the
filter org.apache.hadoop.
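For reference, the property in question is normally set like this in spark-defaults.conf (the filter class below is a placeholder):

```properties
spark.ui.filters com.example.MyAuthFilter
```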