for our internal Zeppelin users.
Thank you,
Ruslan Dautkhanov
spark.yarn.appMasterEnv.PYSPARK_PYTHON
/opt/cloudera/parcels/Anaconda3/bin/python
--
Ruslan Dautkhanov
On Fri, Oct 26, 2018 at 9:10 PM Jeff Zhang wrote:
> Hi Ruslan,
>
> I believe you can just set PYSPARK_PYTHON in spark interpreter setting to
> switch between python2 and python3
>
>
>
python /opt/cloudera/parcels/Anaconda3/bin/python
spark.yarn.appMasterEnv.PYSPARK_PYTHON
/opt/cloudera/parcels/Anaconda3/bin/python
--
Ruslan Dautkhanov
Try adding ZEPPELIN_INTP_CLASSPATH_OVERRIDES, for example,
export ZEPPELIN_INTP_CLASSPATH_OVERRIDES=/etc/hive/conf:/var/lib/sqoop/ojdbc7.jar
--
Ruslan Dautkhanov
On Tue, Oct 23, 2018 at 9:40 PM Lian Jiang wrote:
> Hi,
>
> I am trying to use oracle jdbc to read oracle database tabl
profile at all,
but I haven't tested that.
You can test and let us know which way works for you.
--
Ruslan Dautkhanov
On Mon, Oct 15, 2018 at 11:48 AM Michael Williams
wrote:
> I understand it's possible to build and run Zeppelin using plain Hadoop,
> but we are always running on Cloudera cl
.
--
Ruslan Dautkhanov
On Fri, Oct 5, 2018 at 3:50 PM anirban chatterjee <
anirban.chatter...@gmail.com> wrote:
> this is exactly what I am looking for!
> When will this PR be committed?
> Also, is the autosense triggered by some shortcut command or always on?
> Thanks,
> Anir
Something like this is available on master, I think.
You can see how this works at
https://github.com/apache/zeppelin/pull/2972
(not sure why that particular PR wasn't committed, though)
--
Ruslan Dautkhanov
On Fri, Oct 5, 2018 at 2:43 PM anirban chatterjee <
anirban.chatter...@gmail.
Thanks for bringing this up for discussion. My 2 cents below.
I am with Maksim and Felix on the concerns with special characters now allowed
in notebook names, and also the concerns with different charsets. The Russian
language, for example, most commonly uses the iso-8859-5, koi-8r/u, and
windows-1251 charsets, etc.
Thanks Jeff! Should there be a Zeppelin 0.8.1 release sometime soon with
all the fixes for issues that the users have faced in 0.8.0?
--
Ruslan Dautkhanov
On Mon, Jul 23, 2018 at 12:24 AM Jeff Zhang wrote:
>
> Thanks Ruslan, I will fix it.
>
> Ruslan Dautkhanov 于2018年7月23
/2812/files
--
Ruslan Dautkhanov
On Wed, Aug 8, 2018 at 10:01 AM Paul Brenner wrote:
> ok, I went ahead and opened
> https://issues.apache.org/jira/browse/ZEPPELIN-3692
https://zeppelin.apache.org/ home page still reads
"WHAT'S NEW IN
Apache Zeppelin 0.7"
--
Ruslan Dautkhanov
On Fri, Jun 29, 2018 at 4:56 AM Spico Florin wrote:
> Hi!
> I tried to get the docker image for this version 0.8.0, but it seems
> that is not in the official do
Thank you luxun,
I left a couple of comments in that google document.
--
Ruslan Dautkhanov
On Tue, Jul 17, 2018 at 11:30 PM liuxun wrote:
> hi,Ruslan Dautkhanov
>
> Thank you very much for your question. According to your advice, I added 3
> schematics to illustrate.
>
users to a surviving
instance?
Thanks,
Ruslan Dautkhanov
On Tue, Jul 17, 2018 at 2:46 AM liuxun wrote:
> hi:
>
> Our company has installed and deployed a lot of Zeppelin instances for data
> analysis. The single-server version of Zeppelin could not meet our
> application scenarios, so we trans
.
--
Ruslan Dautkhanov
On Wed, Jul 11, 2018 at 8:34 AM Paul Brenner wrote:
> I created https://issues.apache.org/jira/browse/ZEPPELIN-3616
I've seen this a couple of times..
--
Ruslan Dautkhanov
On Tue, Jul 10, 2018 at 2:34 PM Paul Brenner wrote:
> We are using 0.8 release and noticed that the editor section of paragraphs
> will randomly collapse when you leave a notebook open for a while. Clicking
> "hide ed
These two committed fixes aren't in 0.8.0
https://github.com/apache/zeppelin/pull/3045
https://github.com/apache/zeppelin/pull/3037
See if one of them is relevant to your issue.
--
Ruslan Dautkhanov
On Mon, Jul 9, 2018 at 9:24 AM András Kolbert
wrote:
> The latest, 0.8
>
> On M
Which version of Zeppelin are you using?
If it's 0.7, try 0.8. I remember seeing some issues with AD/LDAP groups fixed
in 0.8 and in master...
--
Ruslan Dautkhanov
On Mon, Jul 9, 2018 at 3:23 AM kolbertand...@gmail.com <
kolbertand...@gmail.com> wrote:
> Hi,
>
> We
I assume some users are connecting to Spark in Zeppelin through Livy.
It seems Livy doesn't support `hadoop.security.auth_to_local` - filed
https://issues.apache.org/jira/browse/LIVY-481
Has anyone run into this issue?
It seems Livy's `livy.server.auth.kerberos.name-rules` config was trying to
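For reference, those name rules use Hadoop's standard auth_to_local syntax; a hypothetical value (the realm and the rule itself are placeholders for illustration):

```
livy.server.auth.kerberos.name-rules = RULE:[1:$1@$0](.*@EXAMPLE.COM)s/@.*// DEFAULT
```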
+1 to remove it
Setting the default interpreter is not very useful anyway (for example, we
can't make %pyspark the default without manually editing xml files in the
Zeppelin distro). https://issues.apache.org/jira/browse/ZEPPELIN-3282
--
Ruslan Dautkhanov
On Fri, Jul 6, 2018 at 7:27 AM Paul Brenner
Great job. Congrats everyone involved.
--
Ruslan Dautkhanov
On Thu, Jun 28, 2018 at 9:47 AM Felix Cheung
wrote:
> Congrats and thanks for putting together the release
>
> --
> *From:* Miquel Angel Andreu Febrer
> *Sent:* Wednesday, June 27, 2
pache.zeppelin.interpreter.remote.RemoteInterpreterServer=DEBUG
> log4j.logger.org.glassfish.jersey.internal.inject.Providers=SEVERE
--
Ruslan Dautkhanov
On Wed, Jun 20, 2018 at 3:01 AM Alessandro Liparoti <
alessandro.l...@gmail.com> wrote:
> Hi,
> yes spark UI is a tool I already use for it but as Ruslan mentioned would
> be
If you set a pretty verbose level in log4j, then you can see the output in the
log files. I've seen it there.
Then you can use regexps to strip paragraph outputs out of the rest of the
debugging messages.
May work as a one-off effort. Might be a good idea to file an enhancement
request - this can also be useful
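A rough sketch of that one-off regexp pass (the log line layout and the "OUTPUT> " marker below are assumptions; check your actual log4j pattern):

```python
import re

# Hypothetical interpreter log excerpt; real layout depends on your
# log4j pattern, so treat the "OUTPUT> " marker as a placeholder.
log_text = """\
DEBUG [2018-06-20 10:00:01] ({pool-2-thread-2}) scheduler heartbeat
OUTPUT> +---+-------+
OUTPUT> | id|  name |
OUTPUT> +---+-------+
DEBUG [2018-06-20 10:00:02] ({pool-2-thread-2}) job finished
"""

# Keep only paragraph-output lines and strip the marker prefix.
marker = re.compile(r"^OUTPUT> ")
paragraph_output = [
    marker.sub("", line)
    for line in log_text.splitlines()
    if marker.match(line)
]
print("\n".join(paragraph_output))
```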
Can you send a screenshot with the error and complete exception stack?
--
Ruslan Dautkhanov
On Mon, Jun 4, 2018 at 10:40 AM, Michael Segel
wrote:
> Hmmm. Still not working.
> Added it to the interpreter setting and restarted the interpreter.
>
> The issue is that I need to
Nope, add that as a Spark interpreter setting.
0.7.2 should work fine with Spark 2.2 afaik.
You may want to go with Zeppelin 0.8 when you upgrade to Spark 2.3.
--
Ruslan Dautkhanov
On Mon, Jun 4, 2018 at 10:29 AM, Michael Segel
wrote:
> I’m assuming that I want to set this in ./conf/zeppe
You may want to check if %spark.dep
https://zeppelin.apache.org/docs/latest/interpreter/spark.html#3-dynamic-dependency-loading-via-sparkdep-interpreter
helps here.
--
Ruslan Dautkhanov
On Fri, May 25, 2018 at 12:46 PM, Michael Segel <msegel_had...@hotmail.com>
wrote:
> What’s the
Was anybody able to import notes on a 0.8 RC or a recent master snapshot?
Notes import seems to be broken.
Filed https://issues.apache.org/jira/browse/ZEPPELIN-3485
This looks serious to me.
--
Ruslan Dautkhanov
G: A HTTP GET method, public javax.ws.rs.core.Response
> org.apache.zeppelin.rest.CredentialRestApi.getCredentials(java.lang.String)
> throws java.io.IOException,java.lang.IllegalArgumentException, should not
> consume any entity.
--
Ruslan Dautkhanov
Thank you Jeff.
--
Ruslan Dautkhanov
On Wed, May 16, 2018 at 6:19 PM, Jeff Zhang <zjf...@gmail.com> wrote:
> Yes, the voting thread is on dev mail list.
>
> https://lists.apache.org/thread.html/c6435f3fcfab4c516e2ef90f436575
> 3268546293afa1ae2c50cc54f9@%3Cdev.zepp
I didn't know 0.8 rc1/rc2 were out. Was it advertised on the dev list?
Thanks for sharing this.
--
Ruslan Dautkhanov
On Sun, May 13, 2018 at 1:23 AM, Rotem Herzberg <
rotem.herzb...@gigaspaces.com> wrote:
> Hello all,
>
> I've downloaded and built the zeppelin v0.8
hat aren't available on latest official release.
Also it gives new features exposure to more testing, so it should be a
win-win for users and developers.
Some other open source projects employ nightly builds.
Thanks!
Ruslan Dautkhanov
Not sure if Spark-Cassandra connector would be helpful?
https://github.com/datastax/spark-cassandra-connector
--
Ruslan Dautkhanov
On Mon, Apr 30, 2018 at 7:38 AM, Soheil Pourbafrani <soheil.i...@gmail.com>
wrote:
> Is it possible to save a Cassandra query result in a variab
>> [ERROR] /home/monster/zeppelin/zeppelin-web/node/node: error while
loading shared libraries: libstdc++.so.6:
>> cannot open shared object file: No such file or directory
$ sudo yum install libstdc++.x86_64
would do?
On Tue, Apr 3, 2018 at 3:09 PM, Joaquín Silva <
Were you guys able to use %spark.dep for %pyspark?
According to the documentation this should work:
https://zeppelin.apache.org/docs/0.7.2/interpreter/spark.html#dependency-management
" Note: %spark.dep interpreter loads libraries to %spark and %spark.pyspark but
not to %spark.sql interpreter. "
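Per that doc, the load happens in a separate %spark.dep paragraph run before the interpreter starts (the artifact coordinates below are placeholders):

```
%spark.dep
z.reset()
z.load("com.example:my-lib:1.0.0")
```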
useIPython=true?)
If that's the case, how can we disable the "IPython is available, use IPython
for PySparkInterpreter" warning?
--
Ruslan Dautkhanov
"Set spark.scheduler.pool to authenticated user name"?
I still think it makes sense ..
--
Ruslan Dautkhanov
On Wed, Mar 14, 2018 at 6:32 PM, Jeff Zhang <zjf...@gmail.com> wrote:
>
> Globally shared mode means all the users shared the sparkcontext and also
> the same spar
?
--
Ruslan Dautkhanov
On Wed, Mar 14, 2018 at 4:57 PM, Ruslan Dautkhanov <dautkha...@gmail.com>
wrote:
> Let's say we have a Spark interpreter set up as
> " The interpreter will be instantiated *Globally *in *shared *process"
>
> When one user is using Spark
oncurrent users are getting PENDING in Zeppelin?
2. Does Zeppelin set *spark.scheduler.pool* accordingly as described above?
PS.
We have set the following Spark interpreter settings:
- zeppelin.spark.concurrentSQL= true
- spark.scheduler.mode = FAIR
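For context, FAIR mode pools are declared in a fairscheduler.xml pointed to by spark.scheduler.allocation.file; a minimal sketch (the pool name and weights are placeholders):

```xml
<?xml version="1.0"?>
<allocations>
  <!-- one pool per user or team; "analysts" is a placeholder name -->
  <pool name="analysts">
    <schedulingMode>FAIR</schedulingMode>
    <weight>1</weight>
    <minShare>0</minShare>
  </pool>
</allocations>
```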
Thank you,
Ruslan Dautkhanov
/documentation/enterprise/release-notes/topics/rg_deprecated.html
--
Ruslan Dautkhanov
On Tue, Mar 13, 2018 at 5:45 PM, Jhon Anderson Cardenas Diaz <
jhonderson2...@gmail.com> wrote:
> Does this new feature work only for yarn-cluster? Or for Spark standalone
> too?
>
> El mar., 13 de
Thanks for sharing this, moon!
Those are great ideas.
--
Ruslan Dautkhanov
On Wed, Mar 7, 2018 at 11:21 AM, moon soo Lee <m...@apache.org> wrote:
> Hi folks,
>
> There was an offline meeting yesterday at PaloAlto with contributors and
> users. We've shared idea a
> Zeppelin version: 0.8.0 (merged at September 2017 version)
https://issues.apache.org/jira/browse/ZEPPELIN-2898 was merged at the end of
September, so not sure if you have that.
Check out
https://medium.com/@zjffdu/zeppelin-0-8-0-new-features-ea53e8810235 for how to
set this up.
--
Ruslan Dautkha
Thanks Jeff!
That's great - our users were asking what the highlights of the new
release are.
--
Ruslan Dautkhanov
On Tue, Mar 13, 2018 at 10:07 AM, moon soo Lee <m...@apache.org> wrote:
> Looks great. I think online registry (helium) for visualization and spell
> is anoth
Thank you Maxim and Moon.
It was interesting to see that most of the users are using official releases
and not builds from master,
and to see some other insights too.
--
Ruslan Dautkhanov
On Wed, Feb 28, 2018 at 10:46 AM, moon soo Lee <m...@apache.org> wrote:
> Thanks for havi
ed message --
From: kpayson64 <notificati...@github.com>
Date: Mon, Feb 19, 2018 at 2:47 PM
Subject: Re: [grpc/grpc] Unicode support in Python 2? (#14446)
To: grpc/grpc <g...@noreply.github.com>
Cc: Ruslan Dautkhanov <dautkha...@gmail.com>, Author <
aut...@noreply.github.
> ---> 82     limit = len(df) > self.max_result
>      83     header_buf = StringIO("")
>      84     if show_index:
> TypeError: object of type 'DataFrame' has no len()
>
>
--
Ruslan Dautkhanov
ved data on
> closed stream
> INFO [2018-02-14 10:39:10,924] ({grpc-default-worker-ELG-1-2}
> AbstractClientStream2.java[inboundDataReceived]:249)
> - Received data on closed stream
> INFO [2018-02-14 10:39:10,925] ({grpc-default-worker-ELG-1-2}
> AbstractClientStream2.java[inboundDataReceived]:249) - Received data on
> closed stream
--
Ruslan Dautkhanov
f attempts to timeout the interpreter in the logs even at
DEBUG level.
Thanks,
Ruslan Dautkhanov
Thank you Jeff
--
Ruslan Dautkhanov
On Thu, Jan 11, 2018 at 1:57 AM, Jeff Zhang <zjf...@gmail.com> wrote:
>
> ZEPPELIN-3119 will fix this. Will update this thread once it is done
>
>
>
>
> Ruslan Dautkhanov <dautkha...@gmail.com>于2017年12月29日周五 上午6:04写道:
>
will contribute back to the project when we find a solution.
Thanks for the suggestion Felix. Is it known if Zeppelin can work fine
with jackson 2.*2*.3?
(certain dependencies currently list jackson 2.*5*.3)
--
Ruslan Dautkhanov
On Sat, Dec 16, 2017 at 3:03 AM, Felix Cheung <felixch
org.codehaus.jackson
> + jackson-mapper-asl
> +
> +
> + org.codehaus.jackson
> + jackson-core-asl
> +
> +
> + org.apache.zookeeper
> + zookeeper
> +
>
>
>
On Sun, Aug 27, 2017 at 2:25
> fallback when ipython interpreter become much more mature.
>
>
>
>
> Ruslan Dautkhanov <dautkha...@gmail.com>于2017年12月11日周一 下午1:20写道:
>
>> Getting "IPython is available, use IPython for PySparkInterpreter"
>> warning after starting pyspark interpret
Getting "IPython is available, use IPython for PySparkInterpreter" warning
after starting pyspark interpreter.
How do I default %pyspark to IPython?
Tried to change
"class": "org.apache.zeppelin.spark.PySparkInterpreter",
to
"class": "org.apache.zeppelin.spark.IPySparkInterpreter",
in
Would be nice if each user's interpreter were started in its own docker
container, a la Cloudera Data Science Workbench.
Then each user's shell interpreter would be pretty isolated.
Actually, from a CDSW session you can pop up a terminal session to your
container, which I found pretty neat.
--
Ruslan
Chrome can print to PDF. Under Destination, change the printer to "Save as
PDF".
--
Ruslan Dautkhanov
On Thu, Nov 9, 2017 at 10:31 AM, shyla deshpande <deshpandesh...@gmail.com>
wrote:
> Hello all,
>
> I want the users to be able to download the data in report
Sorry for bringing up an older topic .. I agree "latest" / "stable" makes a
lot of sense.
Also, what was *not* discussed in this thread is a release cadence target.
IMHO, 2-3 releases a year should give a quicker turnover for releasing the
latest fixes and improvements, and quicker feedback from the users?
That's awesome. Congrats everyone!
Hope to see 0.8.0 release soon too - it has nice new features we would love
to see.
--
Ruslan Dautkhanov
On Fri, Sep 22, 2017 at 1:36 AM, Mina Lee <mina...@apache.org> wrote:
> The Apache Zeppelin community is pleased to announce the ava
://issues.apache.org/jira/browse/ZEPPELIN-2040
completed ..
Thanks
--
Ruslan Dautkhanov
On Sun, Sep 10, 2017 at 9:13 PM, Yeshwanth Jagini <y...@yotabitesllc.com>
wrote:
> Cloudera Data Science workbench is totally a different product. Cloudera
> acquired it from https://sense.io/
>
>
the newer version of Zeppelin
- they won't show up in the list of available notebooks.
--
Ruslan Dautkhanov
On Mon, Aug 28, 2017 at 6:07 PM, Jianfeng (Jeff) Zhang <
jzh...@hortonworks.com> wrote:
>
> Do you use the latest zeppelin master branch ? I see this issue before,
> but
patibility.
Can somebody please point me to the PR / JIRA for this change?
Any workarounds that would make an upgrade easier?
Also, this change makes reverting Zeppelin upgrades impossible.
--
Ruslan Dautkhanov
On Mon, Aug 28, 2017 at 11:35 AM, Ruslan Dautkhanov <dautkha...@gmail.com>
wrote
by: java.text.ParseException: Unparseable date: "2017-08-27
19:56:22.229"
at java.text.DateFormat.parse(DateFormat.java:357)
at
com.google.gson.internal.bind.DateTypeAdapter.deserializeToDate(DateTypeAdapter.java:79)
... 50 more
--
Ruslan Dautkhanov
On Mon, Aug 28, 2017 at 11:32
notebooks not show up?
Thanks,
Ruslan Dautkhanov
.run(ScheduledThreadPoolExecutor.java:293)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
--
Ruslan Dautkhanov
On Sun, Aug 27,
Building from a current Zeppelin snapshot fails with an
org.apache.maven.plugins.enforcer.DependencyConvergence error,
see details below.
Build command
/opt/maven/maven-latest/bin/mvn clean package -DskipTests -Pspark-2.2
-Dhadoop.version=2.6.0-cdh5.12.0 -Phadoop-2.6 -Pvendor-repo
> mvn clean package -DskipTests -Pspark-2.1 -Dhadoop.version=2.6.0-cdh5.10.1
> -Phadoop-2.6 -Pvendor-repo -Pscala-2.10 -Psparkr -pl
> '!alluxio,!flink,!ignite,!lens,!cassandra,!bigquery,!scio' -e
You may need additional steps depending on which interpreters you use (like R
etc).
--
Rusla
It was built. I think binaries are only available for official releases?
--
Ruslan Dautkhanov
On Wed, Aug 2, 2017 at 4:41 PM, Benjamin Kim <bbuil...@gmail.com> wrote:
> Did you build Zeppelin or download the binary?
>
> On Wed, Aug 2, 2017 at 3:40 PM Ruslan Dautkhanov <da
Might need to recompile Zeppelin with Scala 2.11?
Also, Spark 2.2 now requires JDK 8, I believe.
--
Ruslan Dautkhanov
On Tue, Aug 1, 2017 at 6:26 PM, Benjamin Kim <bbuil...@gmail.com> wrote:
> Here is more.
>
> org.apache.zeppelin.interpreter.InterpreterException: WARNING:
Your example works fine for me too.
We're on Zeppelin snapshot ~2 months old.
--
Ruslan Dautkhanov
On Tue, Jul 11, 2017 at 3:11 PM, Ben Vogan <b...@shopkick.com> wrote:
> Here is the specific example that is failing:
>
> import pandas
> z.show(pandas.DataFrame([u'Jalape\x
S=/etc/hive/conf:/var/lib/sqoop/ojdbc7.jar
--
Ruslan Dautkhanov
On Mon, Jul 10, 2017 at 12:10 PM, <dar...@ontrenet.com> wrote:
> Hi
>
> We want to use a jdbc driver with pyspark through Zeppelin. Not the custom
> interpreter but from sqlContext where we can read into datafram
I think if you have a shared storage for notebooks (for example, NFS
mounted from a third server),
and a load-balancer that supports sticky sessions (like F5) on top, it
should be possible to have HA without
any code change in Zeppelin. Am I missing something?
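As an illustration of the sticky-session piece (nginx shown here only as a stand-in for F5; hostnames and ports are placeholders, and Zeppelin's websocket traffic would also need the usual Upgrade headers proxied):

```
upstream zeppelin {
    ip_hash;                           # pin each client to one instance
    server zeppelin1.example.com:8080;
    server zeppelin2.example.com:8080;
}
```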
--
Ruslan Dautkhanov
On Fri, Jun
ll. The
> python part of Airflow is really just describing what gets run and it isn't
> hard to run something that isn't written in python.
>
> On Fri, May 19, 2017 at 2:52 PM, Ruslan Dautkhanov <dautkha...@gmail.com>
> wrote:
>
>> We also use both Zeppelin a
Maven generates some of the web resource names, for example, CSS file names.
- What are those hex ids in the file names?
- Why are those ids duplicated in file names up to 5 times? (see example below
in *bold*)
$ find . -name "main*css"
> ./spark-dependencies/target/spark-2.1.0/docs/css/main.css
>
>
Has anyone experienced the exception below?
It started happening inconsistently after an upgrade to last week's master
snapshot of Zeppelin.
We have had multiple users report the same issue.
java.lang.NullPointerException at
org.apache.zeppelin.spark.Utils.buildJobGroupId(Utils.java:112) at
as it gets
stuck):
[image: Inline image 2]
I think if Zeppelin could understand that there is an interactive prompt,
it would be helpful not only for password prompts but for other cases too
(including the shell interpreter).
--
Ruslan Dautkhanov
On Tue, May 9, 2017 at 4:59 PM, Ben Vogan <b...@shopki
Thanks for sharing this Jeff!
Once Zeppelin supports yarn-cluster, what would be the main benefits of using
the Livy Spark interpreters instead of just the Spark interpreters?
--
Ruslan Dautkhanov
On Thu, May 4, 2017 at 10:51 PM, Jeff Zhang <zjf...@gmail.com> wrote:
> For anyone that
Hope to see this implemented one day
https://issues.apache.org/jira/browse/ZEPPELIN-1774
On Wed, May 3, 2017 at 5:05 AM Petr Knez wrote:
> I know about the feature (link to paragraph) but it does not work if
> Zeppelin has Shiro authorization enabled.
> It works only for me (if
).
They run Zeppelin on edge nodes that have NFS mounts to a drop zone.
ps. Hue has a limit too, by default 100k rows
https://github.com/cloudera/hue/blob/release-3.12.0/desktop/conf.dist/hue.ini#L905
Not sure how much it scales up.
--
Ruslan Dautkhanov
On Tue, May 2, 2017 at 10:41 AM, Paul
ssage.java:69)
at
org.eclipse.jetty.websocket.common.events.AbstractEventDriver.appendMessage(AbstractEventDriver.java:65)
at
org.eclipse.jetty.websocket.common.events.JettyListenerEventDriver.onTextFrame(JettyListenerEventDriver.java:122)
--
Ruslan Dautkhanov
On Wed, Apr 26, 2017 at 2:13 PM
QueryInterpreter
> Comma separated interpreter configurations. First
> interpreter become a default
>
--
Ruslan Dautkhanov
On Sun, Mar 19, 2017 at 1:07 PM, moon soo Lee <m...@apache.org> wrote:
> Easiest way to figure out what your environment needs is,
>
> 1. run SPARK
preter will be instantiated
Globally in shared
process."
--
Ruslan Dautkhanov
On Thu, Apr 6, 2017 at 6:34 PM, Jeff Zhang <zjf...@gmail.com> wrote:
>
> What mode do you use ?
>
>
>
> Ruslan Dautkhanov <dautkha...@gmail.com>于2017年4月7日周五 上午12:49写道:
>
>>
out errors).
It would be a compromise between a completely sequential run and having a way
to define a DAG.
--
Ruslan Dautkhanov
On Thu, Apr 6, 2017 at 1:32 AM, Jeff Zhang <zjf...@gmail.com> wrote:
>
> That's correct, it needs define dependency between paragraphs, e.g.
> %spark
Filed https://issues.apache.org/jira/browse/ZEPPELIN-2368
We had users asking the same.. it forced them to run paragraphs one by one
manually.
--
Ruslan Dautkhanov
On Wed, Apr 5, 2017 at 4:57 PM, moon soo Lee <m...@apache.org> wrote:
> Hi,
>
> That's expected behavio
of %sh interpreter.
Is this a known issue?
--
Ruslan Dautkhanov
rkflow.
Thank you,
Ruslan Dautkhanov
On Wed, Apr 5, 2017 at 12:01 AM, Jeff Zhang <zjf...@gmail.com> wrote:
>
> Hi Ruslan,
>
> Regarding 'make zeppelinContext available in shell interpreter', you may
> want to check https://issues.apache.org/jira/browse/ZEPPELIN-1595
>
sues.apache.org/jira/browse/ZEPPELIN-1660 "Home directory
references (i.e. ~/zeppelin/) in zeppelin-env.sh don't work as expected"
Less critical compared to the above two, but it could complement the
multi-tenancy feature very well.
Best regards,
Ruslan Dautkhanov
On Wed, Mar 22,
> from pyspark.conf import SparkConf
> ImportError: No module named *pyspark.conf*
William, you probably meant
from pyspark import SparkConf
?
--
Ruslan Dautkhanov
On Mon, Mar 20, 2017 at 2:12 PM, William Markito Oliveira <
william.mark...@gmail.com> wrote:
> Ah! Thanks Ru
You're right - it will not be dynamic.
You may want to check
https://issues.apache.org/jira/browse/ZEPPELIN-2195
https://github.com/apache/zeppelin/pull/2079
it seems it is fixed in a current snapshot of Zeppelin (committed 3 weeks
ago).
--
Ruslan Dautkhanov
On Mon, Mar 20, 2017 at 1:21 PM
confusion.
--
Ruslan Dautkhanov
On Mon, Mar 20, 2017 at 12:59 PM, William Markito Oliveira <
mark...@apache.org> wrote:
> I'm trying to use zeppelin.pyspark.python as the variable to set the
> python that Spark worker nodes should use for my job, but it doesn't seem
> to be wo
https://issues.apache.org/jira/browse/ZEPPELIN-2197
This was created just yesterday :-)
On Wed, Mar 1, 2017 at 12:54 PM Alexander Filipchik
wrote:
> Hi,
>
> Is there any way to close an isolated interpreter after some timeout?
> Let's say set an inactivity timeout of 30
in Chrome
or anything like that.
--
Ruslan Dautkhanov
On Sun, Jan 29, 2017 at 2:43 PM, moon soo Lee <m...@apache.org> wrote:
> Hi,
>
> I'm not sure which action can possibly make output blinks and disappears.
> But
>
> ERROR [2017-01-28 11:13:53,338] ({pool-4-thread-1}
> Ap
From the screenshot: "JSON file size cannot exceed MB".
Notice there is no number between "exceed" and "MB".
Not sure if we're missing a setting or an environment variable to define
the limit?
It now prevents us from importing any notebooks.
--
Ruslan Dautkhan
Created https://issues.apache.org/jira/browse/ZEPPELIN-1967
(JIRA had some issues.. https://twitter.com/infrabot - had to wait a
couple of days.)
Great ideas. Thank you everyone.
--
Ruslan Dautkhanov
On Thu, Jan 12, 2017 at 8:55 AM, t p <tauis2...@gmail.com> wrote:
> Is somet
={var1} --param9={var2}
where var1 and var2 would be implied to be fetched as z.get('var1')
and z.get('var2') respectively.
Other thoughts?
Thank you,
Ruslan Dautkhanov
Thank you everyone for confirming this issue.
Created https://issues.apache.org/jira/browse/ZEPPELIN-1832
Thanks again.
--
Ruslan Dautkhanov
On Fri, Dec 16, 2016 at 2:48 AM, blaubaer <rene.pfitz...@nzz.ch> wrote:
> We are seeing this problem as well, regularly actually. E
We'd like to have a paragraph's code generated by a preceding paragraph.
For example, one of the use cases we have
is when %pyspark generates Hive DDLs.
(we can't run those in Spark in some cases)
Any chance the output of a paragraph can be redirected to a following
paragraph?
I was thinking something
I got a lucky jira number :-)
https://issues.apache.org/jira/browse/ZEPPELIN-1777
Thank you Jeff.
--
Ruslan Dautkhanov
On Thu, Dec 8, 2016 at 10:50 PM, Jeff Zhang <zjf...@gmail.com> wrote:
> hmm, I think so, please file a ticket for it.
>
>
>
> Ruslan Dautkhanov <da
image 1]
--
Ruslan Dautkhanov
On Wed, Nov 30, 2016 at 7:34 PM, Jeff Zhang <zjf...@gmail.com> wrote:
> Hi Ruslan,
>
> I missed another thing. You also need to delete the file
> conf/interpreter.json, which stores the original setting. Otherwise the
> original setting is always
>
xport as a
PDF"
Please vote up if you would find that useful too.
Thank you.
--
Ruslan Dautkhanov
On Wed, Dec 7, 2016 at 10:32 PM, Hyunsung Jo <hyunsung...@gmail.com> wrote:
> Hi Ruslan,
>
> Not aware of Zeppelin's roadmap, but perhaps the tag line of the
> ZeppelinHub
Any easy way to get the Spark driver's URL (i.e. from the sparkContext)?
I always have to go to CM -> YARN applications -> choose my Spark job ->
click Application Master etc. to get to Spark's driver UI.
Any way we could derive the driver's URL programmatically from the
SparkContext variable?
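One possible answer for the YARN case: the ResourceManager proxies every application UI at a predictable path, so the link can be assembled from the application id. A sketch (the host, port, and helper name are assumptions for illustration):

```python
def yarn_proxy_url(rm_host, application_id, rm_port=8088):
    """Assemble the ResourceManager web-proxy URL for an application UI.

    The /proxy/<application_id>/ path is the standard YARN RM web-proxy
    layout; rm_host and rm_port are placeholders for your cluster.
    """
    return "http://%s:%d/proxy/%s/" % (rm_host, rm_port, application_id)

# With a live SparkContext the id would come from sc.applicationId;
# it is hard-coded here for illustration only.
print(yarn_proxy_url("rm.example.com", "application_1512345678901_0042"))
```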
ps. Long haul - it
Until we have a good multitenancy support in Zeppelin, we'd have to run
individual Zeppelin instances for each user.
We were trying to use following shiro.ini configurations:
> [urls]
> /api/version = anon
> /** = user["rdautkhanov@CORP.DOMAIN"]
Also tried
> /** = authc,
"className": "org.apache.zeppelin.spark.SparkInterpreter",
to section
"className": "org.apache.zeppelin.spark.PySparkInterpreter",
PySpark is still not the default.
--
Ruslan Dautkhanov
On Tue, Nov 29, 2016 at 10:36 PM, Jeff Zhang <zjf...@gmail.com> wrote:
> No, you don't need
Thank you Jeff.
Do I have to create the interpreter/spark directory in $ZEPPELIN_HOME/conf
or in the $ZEPPELIN_HOME directory?
So is zeppelin.interpreters in zeppelin-site.xml deprecated in 0.7?
Thanks!
--
Ruslan Dautkhanov
On Tue, Nov 29, 2016 at 6:54 PM, Jeff Zhang <zjf...@gmail.com>