Re: zeppelin url append '/#/'

2019-03-21 Thread Jongyoul Lee
I'm not 100% sure, but it's mainly because of the behaviour of the front end (FE).
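
To expand on that: Zeppelin's web UI (zeppelin-web) is an AngularJS single-page
application, and AngularJS's default $location mode keeps the client-side route
in the URL fragment after '#'. A minimal sketch of that routing behaviour (the
module name and routes below are illustrative, not Zeppelin's actual ones):

    // Hash-based routing sketch using AngularJS ngRoute; 'demoApp' is made up.
    angular.module('demoApp', ['ngRoute'])
      .config(['$routeProvider', function($routeProvider) {
        $routeProvider
          // The fragment after '#' is the in-app route; the browser never
          // sends it to the web server.
          .when('/', {template: '<p>home</p>'})
          // An unmatched URL such as http://host/ is redirected to the '/'
          // route, which the address bar renders as http://host/#/.
          .otherwise({redirectTo: '/'});
      }]);

So '/#/' is simply the app's home route; the server never sees anything after '#'.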

On Fri, Mar 22, 2019 at 7:25 AM Weilan Wu  wrote:

> Hi, I am observing that the Zeppelin web server always appends '/#/' to the
> end of the URL. What is the purpose of this, and which code/module is
> responsible for it?
>
> E.g., if I type http://host.myzeppelin.com/ into the browser, the address
> bar ends up showing http://host.myzeppelin.com/#/.
>
> Thanks in advance,
>
> Weilan
>
>

-- 
이종열, Jongyoul Lee, 李宗烈
http://madeng.net


zeppelin url append '/#/'

2019-03-21 Thread Weilan Wu
Hi, I am observing that the Zeppelin web server always appends '/#/' to the
end of the URL. What is the purpose of this, and which code/module is
responsible for it?

E.g., if I type http://host.myzeppelin.com/ into the browser, the address bar
ends up showing http://host.myzeppelin.com/#/.

Thanks in advance,

Weilan



Re: install helium packages

2019-03-21 Thread Jhon Anderson Cardenas Diaz
Is there anything helium-related in the Zeppelin logs once you start it? If
there is a problem, they should say so.

By the way, once you open a notebook that has associated helium plugins,
Zeppelin tries to download and install node and yarn at runtime, which makes
me wonder whether it will work without an internet connection.
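
If it does not, one possible workaround (hedged: these property names come
from the 0.8.x zeppelin-site.xml.template, so verify they exist in the build
in use) is to point the helium installer URLs at an internal mirror:

    <!-- Sketch for zeppelin-site.xml; the mirror host below is made up. -->
    <property>
      <name>zeppelin.helium.node.installer.url</name>
      <value>http://mirror.internal.example.com/nodejs/dist/</value>
    </property>
    <property>
      <name>zeppelin.helium.npm.installer.url</name>
      <value>http://mirror.internal.example.com/npm-registry/</value>
    </property>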

On Thu, Mar 21, 2019 at 4:24 PM, Lian Jiang ()
wrote:

> Any clue is highly appreciated!
>
> On Wed, Mar 20, 2019 at 9:27 PM Lian Jiang  wrote:
>
>> Hi,
>>
>> I am using Hortonworks HDP 3.0, which has Zeppelin 0.8.0. I followed
>> https://zeppelin.apache.org/docs/0.8.0/development/helium/writing_visualization_basic.html
>> to install helium viz packages. Since my environment does not have internet
>> access, I have to install the packages into a local registry. Here is what I did:
>>
>> Step 1. Added zeppelin.helium.localregistry.default to the custom
>> zeppelin-site settings, pointing to a local helium folder on the Zeppelin
>> master host, then restarted Zeppelin in Ambari.
>> Step 2. Downloaded https://s3.amazonaws.com/helium-package/helium.json to
>> the helium folder.
>> Step 3. Created an npm package tarball, uploaded it to the Zeppelin master
>> host, and unzipped it into the helium folder. I also updated the artifact
>> to point to the local npm folder (e.g. from sogou-map-vis@1.0.0 to
>> /u01/helium/sogou-map-vis).
>>
>> However, Zeppelin's helium page does not list any viz packages. I
>> searched for zeppelin.helium.localregistry.default in the Zeppelin source
>> code and found that it only exists in the documentation file
>> writing_visualization_basic.md.
>>
>> What am I missing? Thanks a lot.
>>
>>
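
For step 1 above, the property described in writing_visualization_basic.md
would look like the following in zeppelin-site.xml (hedged: as noted in the
thread, the 0.8.0 source may not actually read this key, so treat this as
what the docs describe rather than verified behaviour; the path is the one
from this thread):

    <property>
      <name>zeppelin.helium.localregistry.default</name>
      <value>/u01/helium</value>
      <description>Location of the local helium registry directory</description>
    </property>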


Re: install helium packages

2019-03-21 Thread Lian Jiang
Any clue is highly appreciated!

On Wed, Mar 20, 2019 at 9:27 PM Lian Jiang  wrote:

> Hi,
>
> I am using Hortonworks HDP 3.0, which has Zeppelin 0.8.0. I followed
> https://zeppelin.apache.org/docs/0.8.0/development/helium/writing_visualization_basic.html
> to install helium viz packages. Since my environment does not have internet
> access, I have to install the packages into a local registry. Here is what I did:
>
> Step 1. Added zeppelin.helium.localregistry.default to the custom
> zeppelin-site settings, pointing to a local helium folder on the Zeppelin
> master host, then restarted Zeppelin in Ambari.
> Step 2. Downloaded https://s3.amazonaws.com/helium-package/helium.json to
> the helium folder.
> Step 3. Created an npm package tarball, uploaded it to the Zeppelin master
> host, and unzipped it into the helium folder. I also updated the artifact
> to point to the local npm folder (e.g. from sogou-map-vis@1.0.0 to
> /u01/helium/sogou-map-vis).
>
> However, Zeppelin's helium page does not list any viz packages. I searched
> for zeppelin.helium.localregistry.default in the Zeppelin source code and
> found that it only exists in the documentation file
> writing_visualization_basic.md.
>
> What am I missing? Thanks a lot.
>
>
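
For steps 2 and 3 above, a local registry entry for the package mentioned in
this thread would look roughly like the sketch below (a descriptor in the
style of writing_visualization_basic.md; the description and icon values are
illustrative, and only the /u01/helium path comes from the thread):

    {
      "type": "VISUALIZATION",
      "name": "sogou-map-vis",
      "description": "Example local visualization (illustrative)",
      "license": "Apache-2.0",
      "icon": "<i class='fa fa-bar-chart'></i>",
      "artifact": "/u01/helium/sogou-map-vis"
    }

Note the artifact points at the unpacked local npm folder rather than a
name@version coordinate on the public npm registry.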


Zeppelin 0.8.1 & Spark User Impersonation

2019-03-21 Thread James Srinivasan
I'm using Zeppelin 0.8.1 with a Kerberized HDP 3 cluster. Without user
impersonation, it seems fine. With user impersonation, I get the
following stack trace when trying to run println(sc.version). Google
suggests libthrift incompatibilities, but the one jar I could find is
the same in 0.8.0 and 0.8.1 (I didn't get this error with user
impersonation on 0.8.0).

Any help much appreciated,

java.lang.NoSuchMethodError: com.facebook.fb303.FacebookService$Client.sendBaseOneway(Ljava/lang/String;Lorg/apache/thrift/TBase;)V
    at com.facebook.fb303.FacebookService$Client.send_shutdown(FacebookService.java:436)
    at com.facebook.fb303.FacebookService$Client.shutdown(FacebookService.java:430)
    at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.close(HiveMetaStoreClient.java:498)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:156)
    at com.sun.proxy.$Proxy20.close(Unknown Source)
    at org.apache.hadoop.hive.ql.metadata.Hive.close(Hive.java:291)
    at org.apache.hadoop.hive.ql.metadata.Hive.access$000(Hive.java:137)
    at org.apache.hadoop.hive.ql.metadata.Hive$1.remove(Hive.java:157)
    at org.apache.hadoop.hive.ql.metadata.Hive.closeCurrent(Hive.java:261)
    at org.apache.spark.deploy.security.HiveDelegationTokenProvider$$anonfun$obtainDelegationTokens$2.apply$mcV$sp(HiveDelegationTokenProvider.scala:114)
    at org.apache.spark.util.Utils$.tryLogNonFatalError(Utils.scala:1361)
    at org.apache.spark.deploy.security.HiveDelegationTokenProvider.obtainDelegationTokens(HiveDelegationTokenProvider.scala:113)
    at org.apache.spark.deploy.security.HadoopDelegationTokenManager$$anonfun$obtainDelegationTokens$2.apply(HadoopDelegationTokenManager.scala:132)
    at org.apache.spark.deploy.security.HadoopDelegationTokenManager$$anonfun$obtainDelegationTokens$2.apply(HadoopDelegationTokenManager.scala:130)
    at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
    at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
    at scala.collection.Iterator$class.foreach(Iterator.scala:893)
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
    at scala.collection.MapLike$DefaultValuesIterable.foreach(MapLike.scala:206)
    at scala.collection.TraversableLike$class.flatMap(TraversableLike.scala:241)
    at scala.collection.AbstractTraversable.flatMap(Traversable.scala:104)
    at org.apache.spark.deploy.security.HadoopDelegationTokenManager.obtainDelegationTokens(HadoopDelegationTokenManager.scala:130)
    at org.apache.spark.deploy.yarn.security.YARNHadoopDelegationTokenManager.obtainDelegationTokens(YARNHadoopDelegationTokenManager.scala:56)
    at org.apache.spark.deploy.yarn.Client.prepareLocalResources(Client.scala:388)
    at org.apache.spark.deploy.yarn.Client.createContainerLaunchContext(Client.scala:921)
    at org.apache.spark.deploy.yarn.Client.submitApplication(Client.scala:169)
    at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:57)
    at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:164)
    at org.apache.spark.SparkContext.<init>(SparkContext.scala:500)
    at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2493)
    at org.apache.spark.sql.SparkSession$Builder$$anonfun$7.apply(SparkSession.scala:934)
    at org.apache.spark.sql.SparkSession$Builder$$anonfun$7.apply(SparkSession.scala:925)
    at scala.Option.getOrElse(Option.scala:121)
    at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:925)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.zeppelin.spark.BaseSparkScalaInterpreter.spark2CreateContext(BaseSparkScalaInterpreter.scala:246)
    at org.apache.zeppelin.spark.BaseSparkScalaInterpreter.createSparkContext(BaseSparkScalaInterpreter.scala:178)
    at org.apache.zeppelin.spark.SparkScala211Interpreter.open(SparkScala211Interpreter.scala:89)
    at org.apache.zeppelin.spark.NewSparkInterpreter.open(NewSparkInterpreter.java:102)
    at org.apache.zeppelin.spark.SparkInterpreter.open(SparkInterpreter.java:62)
    at org.apache.zeppelin.interpreter.LazyOpenInterpreter.open(LazyOpenInterpreter.java:69)
    at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:616)
    at org.apache.zeppelin.scheduler.Job.run(Job.java:188)
    at org.apache.zeppelin.scheduler.FIFOScheduler$1.run(FIFOScheduler.java:140)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at
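
The trace shows the failure occurring while Spark's HiveDelegationTokenProvider
obtains Hive delegation tokens. A hedged mitigation (not from this thread, and
only applicable if the notebooks do not need Hive): Spark 2.3+ allows disabling
a delegation token provider per service, which would sidestep the failing
fb303/libthrift call entirely.

    # Sketch for spark-defaults.conf or the Zeppelin Spark interpreter
    # settings; verify the key against your Spark version before relying on it.
    spark.security.credentials.hive.enabled false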