Re: Store CSV file with Notebook?

2018-11-27 Thread Spico Florin
Hi!
  I have created a volume for the docker container and put the data in
that volume. I'm using docker compose, and the docker compose service for
zeppelin looks like this:

interactive-analytics:
  build: interactive-analytics
  container_name: "zeppelin-analytics"
  environment:
    - KAFKA_BROKER=kafka-conn:9092
  ports:
    - "8080:8080"
  volumes:
    - './interactive-analytics/resources/conf:/zeppelin/conf'
    - './interactive-analytics/resources/notebook:/zeppelin/notebook'
    # this is the part where I'm handling my data; in the notebook I refer to /data/my.csv
    - './interactive-analytics/resources/data:/data'
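
With that mount in place, a %python paragraph can write a CSV under /data.
A minimal sketch (the file name and sample rows are assumptions; only the
/data mount point comes from the compose file above):

%python
import csv

# /data is the container-side path of the volume declared above;
# the rows here are hypothetical sample data.
rows = [("id", "value"), (1, 0.5), (2, 1.5)]
with open("/data/my.csv", "w") as f:
    csv.writer(f).writerows(rows)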

Please check the docker documentation on how to run your docker container
with an attached volume.

I hope it helps.
Florin



On Fri, Nov 2, 2018 at 2:40 PM Eric Pugh 
wrote:

> I am using the %python interpreter, and I want to write out a .csv file.
> Can I do that and store it with my Zeppelin notebook?
>
> I’m running Zeppelin in a Docker compose script, so I could also mount a
> “data dir” of some variety….
>
>
>
> Eric
>
> ___
> *Eric Pugh **| *Founder & CEO | OpenSource Connections, LLC | 434.466.1467
> | http://www.opensourceconnections.com | My Free/Busy
> 
> Co-Author: Apache Solr Enterprise Search Server, 3rd Ed
> 
> This e-mail and all contents, including attachments, is considered to be
> Company Confidential unless explicitly stated otherwise, regardless
> of whether attachments are marked as such.
>
>


Manage Flink job third party libraries with Zeppelin on a Flink cluster

2018-11-27 Thread Spico Florin
Hello!

I'm using Zeppelin 0.7.3 with Flink 1.4.2 in cluster mode.
My Flink job has dependencies on third-party libraries (Flink CEP, jackson
json, etc.), and when I run the notebook I get a ClassNotFoundException on
the Flink task side, even though I have configured the Flink interpreter
dependencies on the mentioned libraries.

This is expected, since Flink itself ships only the core libraries (
https://ci.apache.org/projects/flink/flink-docs-stable/start/dependencies.html
).
Unfortunately, by default the Zeppelin Flink interpreter doesn't pack these
dependencies into the submitted jar file, hence the ClassNotFoundException.

Therefore, I would like to ask how I can configure the Flink interpreter
to send all the required third-party dependencies to the Flink cluster.
Is there any environment variable similar to SPARK_SUBMIT_OPTIONS?
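
For context, a minimal %flink paragraph of the kind that fails might look
like this (an illustrative sketch, not from the thread; it assumes
flink-cep-scala 1.4 on the interpreter classpath):

%flink
import org.apache.flink.cep.scala.pattern.Pattern
import org.apache.flink.streaming.api.windowing.time.Time

// Compiles in the notebook because the interpreter has the dependency,
// but the task managers throw ClassNotFoundException at runtime if
// flink-cep never reaches the cluster (it is not a Flink core library).
val pattern = Pattern.begin[(String, Long)]("start")
  .where(_._2 > 0)
  .within(Time.seconds(10))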

I look forward to your answers.

Regards,
 Florin


Re: Available and custom roles

2018-10-26 Thread Spico Florin
Hello!
 Thank you for your responses. It is still not clear to me how to add
different permissions (Zeppelin actions?) with the help of the roles.
In the example provided by liuxun there is no difference between the
two roles; both of them have *.
If I'm not using LDAP, just the basic shiro configuration, what could be
other options?
Thanks.
 Florin
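
As an illustration of how roles can be differentiated (a sketch, not from
this thread; the role names are assumptions), Zeppelin's fine-grained
control lives in shiro's [urls] section rather than in the role
definitions themselves:

[roles]
admin = *
analyst = *

[urls]
/api/version = anon
# only admin may touch the interpreter, configuration, and credential APIs
/api/interpreter/** = authc, roles[admin]
/api/configurations/** = authc, roles[admin]
/api/credential/** = authc, roles[admin]
# everyone else still has to log in for the rest
/** = authc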

On Fri, Oct 26, 2018 at 3:49 PM Fawze Abujaber  wrote:

> What other choices can be used instead of * in the roles?
>
> I configured Zeppelin to work with AD and yes, I'm able to differentiate
> between the 2 groups in the AD role/group map.
>
> For example, I have 2 groups: zeppelin_admins and zeppelin_members.
>
> And when keeping the url section as is, the admins will have access to the
> mentioned urls and the members will not, but how can I disable other users
> from authenticating at all?
>
> For now our AD users are able to authenticate and access the UI.
>
> On Fri, Oct 26, 2018 at 3:44 PM liuxun  wrote:
>
>> You can refer to the following configuration:
>>
>> [users]
>> # List of users with their password allowed to access Zeppelin.
>> # To use a different strategy (LDAP / Database / ...) check the shiro doc
>> at http://shiro.apache.org/configuration.html#Configuration-INISections
>> # To enable admin user, uncomment the following line and set an
>> appropriate password.
>> admin = password1, admin
>> user1 = password1, bi
>> user2 = password2, bi
>> user3 = password3, bi
>>
>>
>> [roles]
>> bi = *
>> admin = *
>>
>> [urls]
>> # This section is used for url-based security.
>> # You can secure interpreter, configuration and credential information by
>> urls. Comment or uncomment the below urls that you want to hide.
>> # anon means the access is anonymous.
>> # authc means Form based Auth Security
>> # To enforce security, comment the line below and uncomment the next one
>> /api/version = anon
>> /api/openid/* = anon
>> /api/interpreter/** = authc, roles[admin]
>> /api/configurations/** = authc, roles[admin]
>> /api/credential/** = authc, roles[admin]
>>
>>
>> On Oct 26, 2018, at 7:40 PM, Spico Florin wrote:
>>
>> Hello!
>>
>> I would like to know what the available roles in Zeppelin are (besides
>> admin, which has *).
>> How can I create/define my own roles based on the actions that a user is
>> allowed?
>> In the shiro.ini the examples are too generic, with role1 and role2 both
>> having all actions allowed (*).
>>
>> Can you please define the fine-grained actions that I can add to a role?
>>
>> I look forward to your answers.
>> Best regards,
>>  Florin
>>
>>
>>
>
> --
> Take Care
> Fawze Abujaber
>


Available and custom roles

2018-10-26 Thread Spico Florin
Hello!

I would like to know what the available roles in Zeppelin are (besides
admin, which has *).
How can I create/define my own roles based on the actions that a user is
allowed?
In the shiro.ini the examples are too generic, with role1 and role2 both
having all actions allowed (*).

Can you please define the fine-grained actions that I can add to a role?

I look forward to your answers.
Best regards,
 Florin


Re: Run/install tensorframes on zeppelin pyspark

2018-08-10 Thread Spico Florin
Hello!
  Thank you very much for your response.
As I understand it, in order to use tensorframes in a Zeppelin pyspark
notebook with the Spark master set to local:
1. we should run the command pip install tensorframes
2. we should set up PYSPARK_PYTHON in conf/zeppelin-env.sh

I have performed the above steps like this:

python2.7 -m pip install tensorframes==0.2.7
export PYSPARK_PYTHON=python2.7 in conf/zeppelin-env.sh
"zeppelin.pyspark.python": "python2.7" in conf/interpreter.json

As you can see, the installation and the configuration refer to the same
python2.7 version.
After performing all of these steps, I'm still getting the same error:
"ImportError: No module named tensorframes"

I'm still puzzled that this import works fine in the pyspark shell shipped
with Spark yet, for example, results in errors in plain python2.7.
Also, I've observed that the pyspark shell from /spark/bin doesn't need the
tensorframes python package installed, which is even more confusing.
Is the Zeppelin pyspark interpreter not using the same approach as the
Spark pyspark shell?
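
One way to narrow this down (an editor's suggestion, not from the thread)
is to print, from a %pyspark paragraph, which python binary and search path
the interpreter actually uses, and compare them with the working pyspark
shell:

%pyspark
import sys

# If sys.executable is not the python2.7 that received the pip install,
# the PYSPARK_PYTHON setting is not reaching the interpreter process.
print(sys.executable)
print("\n".join(sys.path))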

Has anyone succeeded in importing/using tensorframes correctly in Zeppelin
with the default Spark master setup (local[*])? If yes, how?

I look forward to your answers.

Regards,
 Florin


On Thu, Aug 9, 2018 at 3:52 AM, Jeff Zhang  wrote:

>
> Make sure you use the correct python which has tensorframes installed. Use
> PYSPARK_PYTHON to configure the python.
>
>
>
> On Wed, Aug 8, 2018 at 9:59 PM, Spico Florin wrote:
>
>> Hi!
>>
>> I would like to use tensorframes in my pyspark notebook.
>>
>> I have performed the following:
>>
>> 1. In the spark interpreter added a new repository
>> http://dl.bintray.com/spark-packages/maven
>> 2. in the spark interpreter added the dependency
>> databricks:tensorframes:0.2.9-s_2.11
>> 3. pip install tensorframes
>>
>>
>> In both 0.7.3 and 0.8.0:
>> 1. the following code resulted in the error: "ImportError: No module named
>> tensorframes"
>>
>> %pyspark
>> import tensorframes as tfs
>>
>> 2. the following code succeeded
>> %spark
>> import org.tensorframes.{dsl => tf}
>> import org.tensorframes.dsl.Implicits._
>> val df = spark.createDataFrame(Seq(1.0->1.1, 2.0->2.2)).toDF("a", "b")
>>
>> // As in Python, scoping is recommended to prevent name collisions.
>> val df2 = tf.withGraph {
>>   val a = df.block("a")
>>   // Unlike python, the scala syntax is more flexible:
>>   val out = a + 3.0 named "out"
>>   // The 'mapBlocks' method is added using implicits to dataframes.
>>   df.mapBlocks(out).select("a", "out")
>> }
>>
>> // The transform is all lazy at this point, let's execute it with collect:
>> df2.collect()
>>
>> I ran the code above directly with the spark interpreter with the default
>> configurations (master set to local[*], so not via the spark-submit
>> command).
>>
>> Also, I have installed Spark locally (SPARK_HOME) and ran the command
>>
>> $SPARK_HOME/bin/pyspark --packages databricks:tensorframes:0.2.9-s_2.11
>>
>> and the code below worked as expected
>>
>> import tensorframes as tfs
>>
>>  Can you please help to solve this?
>>
>> Thanks,
>>
>>  Florin
>>


Run/install tensorframes on zeppelin pyspark

2018-08-08 Thread Spico Florin
Hi!

I would like to use tensorframes in my pyspark notebook.

I have performed the following:

1. In the spark interpreter added a new repository
http://dl.bintray.com/spark-packages/maven
2. in the spark interpreter added the
dependency databricks:tensorframes:0.2.9-s_2.11
3. pip install tensorframes
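
A quick way to confirm that step 3 landed in the intended python (an
editor's suggestion; python2.7 stands for whichever python the interpreter
launches):

python2.7 -m pip show tensorframes   # prints a Location: line when the package is installed for this python
python2.7 -c "import tensorframes"   # reproduces the ImportError if the package (or its deps) is missing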


In both 0.7.3 and 0.8.0:
1. the following code resulted in the error: "ImportError: No module named
tensorframes"

%pyspark
import tensorframes as tfs

2. the following code succeeded
%spark
import org.tensorframes.{dsl => tf}
import org.tensorframes.dsl.Implicits._
val df = spark.createDataFrame(Seq(1.0->1.1, 2.0->2.2)).toDF("a", "b")

// As in Python, scoping is recommended to prevent name collisions.
val df2 = tf.withGraph {
  val a = df.block("a")
  // Unlike python, the scala syntax is more flexible:
  val out = a + 3.0 named "out"
  // The 'mapBlocks' method is added using implicits to dataframes.
  df.mapBlocks(out).select("a", "out")
}

// The transform is all lazy at this point, let's execute it with collect:
df2.collect()

I ran the code above directly with the spark interpreter with the default
configurations (master set to local[*], so not via the spark-submit
command).

Also, I have installed Spark locally (SPARK_HOME) and ran the command

$SPARK_HOME/bin/pyspark --packages databricks:tensorframes:0.2.9-s_2.11

and the code below worked as expected

import tensorframes as tfs

 Can you please help to solve this?

Thanks,

 Florin


Re: [ANNOUNCE] Apache Zeppelin 0.8.0 released

2018-06-29 Thread Spico Florin
Hi!
  I tried to get the docker image for version 0.8.0, but it seems that it
is not in the official docker hub repository:
https://hub.docker.com/r/apache/zeppelin/tags/ lists no such version as
0.8.0.
Also, the commands

 docker pull apache/zeppelin:0.8.0

or

docker run -p 8080:8080 --rm --name zeppelin apache/zeppelin:0.8.0

fail with:

Error response from daemon: manifest for apache/zeppelin:0.8.0 not found

Can you please check? Or please instruct me on how to get this version for
docker.

Thanks.
Regards,
 Florin



On Fri, Jun 29, 2018 at 6:13 AM, Jongyoul Lee  wrote:

> Great work!!
>
> On Fri, Jun 29, 2018 at 9:49 AM, Jeff Zhang  wrote:
>
>> Thanks Patrick, I have fixed the broken link.
>>
>>
>>
>> On Fri, Jun 29, 2018 at 7:13 AM, Patrick Maroney wrote:
>>
>> > Install guides:
>> >
>> > http://zeppelin.apache.org/docs/0.8.0/install/install.html
>> > Not Found
>> >
>> > The requested URL /docs/0.8.0/install/install.html was not found on this
>> > server.
>> >
>> > http://zeppelin.apache.org/docs/0.8.0/manual/interpreterinstallation.html
>> >
>> > Not Found
>> >
>> > The requested URL /docs/0.8.0/manual/interpreterinstallation.html was not
>> > found on this server.
>> >
>> >
>> > Patrick Maroney
>> > Principal Engineer - Data Science & Analytics
>> > Wapack Labs
>> >
>> >
>> > On Jun 28, 2018, at 6:59 PM, Jianfeng (Jeff) Zhang <jzh...@hortonworks.com> wrote:
>> >
>> > Hi Patrick,
>> >
>> > Which link is broken ? I can access all the links.
>> >
>> > Best Regard,
>> > Jeff Zhang
>> >
>> >
>> > From: Patrick Maroney 
>> > Reply-To: 
>> > Date: Friday, June 29, 2018 at 4:59 AM
>> > To: 
>> > Cc: dev 
>> > Subject: Re: [ANNOUNCE] Apache Zeppelin 0.8.0 released
>> >
>> > Great work Team/Community!
>> >
>> > Links on the main download page are broken:
>> >
>> > http://zeppelin.apache.org/download.html
>> >
>> > ...at least the ones I need ;-)
>> >
>> > *Patrick Maroney*
>>
>> > Principal Engineer - Data Science & Analytics
>> > Wapack Labs LLC
>> >
>> >
>> > Public Key: http://pgp.mit.edu/pks/lookup?op=get&search=0x7C810C9769BD29AF
>> >
>> > On Jun 27, 2018, at 11:21 PM, Prabhjyot Singh wrote:
>> >
>> > Awesome! congratulations team.
>> >
>> >
>> >
>> > On Thu 28 Jun, 2018, 8:39 AM Taejun Kim,  wrote:
>> >
>> >> Awesome! Thanks for your great work :)
>> >>
>> >> On Thu, Jun 28, 2018 at 12:07 PM, Jeff Zhang wrote:
>> >>
>> >>> The Apache Zeppelin community is pleased to announce the availability of
>> >>> the 0.8.0 release.
>> >>>
>> >>> Zeppelin is a collaborative data analytics and visualization tool for
>> >>> distributed, general-purpose data processing systems such as Apache Spark,
>> >>> Apache Flink, etc.
>> >>>
>> >>> This is another major release after the last minor release, 0.7.3.
>> >>> The community put significant effort into improving Apache Zeppelin since
>> >>> the last release. 122 contributors fixed a total of 602 issues. Lots of
>> >>> new features are introduced, such as inline configuration, the ipython
>> >>> interpreter, yarn-cluster mode support, and the interpreter lifecycle
>> >>> manager.
>> >>>
>> >>> We encourage you to download the latest release
>> >>> from http://zeppelin.apache.org/download.html
>> >>>
>> >>> Release note is available
>> >>> at http://zeppelin.apache.org/releases/zeppelin-release-0.8.0.html
>> >>>
>> >>> We welcome your help and feedback. For more information on the project
>> >>> and
>> >>> how to get involved, visit our website at http://zeppelin.apache.org/
>> >>>
>> >>> Thank you all users and contributors who have helped to improve Apache
>> >>> Zeppelin.
>> >>>
>> >>> Regards,
>> >>> The Apache Zeppelin community
>> >>>
>> >> --
>> >> Taejun Kim
>> >>
>> >> Data Mining Lab.
>> >> School of Electrical and Computer Engineering
>> >> University of Seoul
>> >>
>> >
>> >
>>
>
>
>
> --
> 이종열, Jongyoul Lee, 李宗烈
> http://madeng.net
>


How to track a zeppelin job for multiple job submission request

2018-05-01 Thread Spico Florin
Hello!

 I have a Zeppelin notebook that I would like to expose as a REST
service to multiple users.
A user can request the results from the REST service backed by Zeppelin
multiple times.

I would like the calls to the service to be asynchronous, using the async
API:

https://zeppelin.apache.org/docs/0.7.3/rest-api/rest-notebook.html#run-all-paragraphs
or
https://zeppelin.apache.org/docs/0.7.3/rest-api/rest-notebook.html#run-a-paragraph-asynchronously

The calls are performed from a web browser. The results from the service
are put in a Kafka topic and sent back to the client via a web socket.

The flow is like this:
 User (web browser) -> Zeppelin REST job (uses the Spark interpreter) ->
Kafka topic -> Socket.io (Kafka web-socket plugin) -> Web browser

Because the mentioned REST API doesn't return a job id:
- how can I successfully return the results back to the client requester?
- how can I implement the described workflow?
- how can I distinguish between client requests?
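
One possible workaround (a sketch, not from this thread: the
correlation_id parameter and echoing it into the Kafka messages are
assumptions; noteid and paragraph_id are placeholders) is to let the
client generate its own id and thread it through the job parameters:

import json
import uuid

import requests

ZEPPELIN = "http://zep_host:zep_port"

def submit(params, noteid, paragraph_id):
    # The client mints a correlation id and ships it with the job parameters;
    # the notebook must copy it into every Kafka message it produces, so the
    # web-socket layer can route results back to the right browser.
    corr_id = str(uuid.uuid4())
    body = {"params": dict(params, correlation_id=corr_id)}
    resp = requests.post(
        "%s/api/notebook/job/%s/%s" % (ZEPPELIN, noteid, paragraph_id),
        data=json.dumps(body),
        headers={"Content-Type": "application/json"},
    )
    resp.raise_for_status()
    return corr_id  # the browser then listens for messages tagged with this id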

I look forward to your answers.

Best regards,
  Florin


Re: Manually import notes via copy notebook folder

2018-04-27 Thread Spico Florin
Hello!
  Thank you all. It worked. My problem was that the notebook files had not
been copied to the proper location.

Best regards,
  Florin
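
For the record, the procedure that worked boils down to something like
this (a sketch assuming a default local install and the Git/VFS notebook
storage discussed below; the source paths are placeholders):

$ZEPPELIN_HOME/bin/zeppelin-daemon.sh stop
cp -r /path/to/exported/notebook/* $ZEPPELIN_HOME/notebook/
cp -r /path/to/exported/conf/*     $ZEPPELIN_HOME/conf/
$ZEPPELIN_HOME/bin/zeppelin-daemon.sh start   # then wait a minute or two before checking the UI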

On Fri, Apr 27, 2018 at 4:46 PM, Mohit Jaggi  wrote:

> Restart Z. And wait a min or two before checking.
>
> On Fri, Apr 27, 2018 at 6:45 AM, Spico Florin 
> wrote:
>
>> Hi!
>>   Thank you for your answers. I'm using Zeppelin 0.7.2 with the local
>> storage GitNotebookRepo (org.apache.zeppelin.notebook.repo.GitNotebookRepo).
>> I did copy the conf folder and the notebook folder, but I still don't see
>> my notes in Zeppelin UI.
>> Am I missing something?
>> Florin
>>
>> On Fri, Apr 27, 2018 at 4:18 PM, Jeff Zhang  wrote:
>>
>>>
>>> It depends on what NotebookRepo you use. If you use the local disk to store
>>> notes (GitNotebookRepo and VFSNotebookRepo), then you have to copy the
>>> notebook folder. But if you use other remote
>>> storage (like S3, Azure, or HDFS in 0.8), then you just need to copy conf to
>>> make sure you use the correct configuration.
>>>
>>>
>>> On Fri, Apr 27, 2018 at 8:53 PM, Paul Brenner wrote:
>>>
>>>> Hopefully someone will jump in with a more specific answer, but...
>>>> we do this whenever we update Zeppelin to a new version. We copy over
>>>> both the notebook directory and the conf directory, then restart. It works!
>>>> But hopefully someone can be more specific about what you need in conf.
>>>>
>>>>
>>>> *Paul Brenner*
>>>> SR. DATA SCIENTIST
>>>> *(217) 390-3033*
>>
>> On Fri, Apr 27, 2018 at 8:40 AM, Spico Florin wrote:
>>
>>> Hello!
>>>   I would like to import notes into Zeppelin by manually overwriting the
>>> notebook folder.
>>> The files are copied in the notebook folder, but I cannot see them in
>>> the Zeppelin UI.
>>> Is there any other place where Zeppelin stores information about the
>>> notebooks?
>>> Besides the REST API, is it possible to import the notes by applying a
>>> procedure like the one described?
>>> I look forward to your solutions.
>>> Thanks.
>>> Best regards,
>>>  Florin
>>>
>>
>>


Manually import notes via copy notebook folder

2018-04-27 Thread Spico Florin
Hello!
  I would like to import notes into Zeppelin by manually overwriting the
notebook folder.
The files are copied in the notebook folder, but I cannot see them in the
Zeppelin UI.
Is there any other place where Zeppelin stores information about the
notebooks?
Besides the REST API, is it possible to import the notes by applying a
procedure like the one described?
I look forward to your solutions.
Thanks.
Best regards,
 Florin


Zeppelin execute job (all paragraphs) with parameters via REST API

2018-04-19 Thread Spico Florin
Hello!

I have a zeppelin note that has many paragraphs. I have one paragraph that
should receive/set up some parameters that will be further used by the
other paragraphs.

I would like to submit a job via the Zeppelin REST API that will run with
these parameters set up in the body.

I know that in Zeppelin there is a REST API service that runs a paragraph
with parameters in the body; an example of such a call is:
curl -H "Content-Type: application/json" -X POST -d '{ "params": {
"filename": "/myfolder/my_file.txt","min":0.89,"max":25} }'
http://zep_host:zep_port/api/notebook/job/noteid/paragraph_id

I would like to have something similar for running the job (all paragraphs)
with the parameters, without having to make two separate calls (one POST
call to the paragraph that sets up the parameters, and one call to submit
the entire job); that two-call fallback is sketched below.
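
For reference, the two-call fallback would look roughly like this (a
sketch; the run-all endpoint is assumed from the run-all-paragraphs REST
docs cited earlier in this archive):

# 1) run the set-up paragraph with the parameters in the body
curl -H "Content-Type: application/json" -X POST -d '{ "params": {
"filename": "/myfolder/my_file.txt","min":0.89,"max":25} }' \
  http://zep_host:zep_port/api/notebook/job/noteid/paragraph_id

# 2) then run the entire note (all paragraphs)
curl -X POST http://zep_host:zep_port/api/notebook/job/noteid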

I look forward to your solutions.

Thanks.

Florin