Re: HTTP ERROR: 503

2017-08-04 Thread Suvayu Ali
Hello,

On Sat, Aug 05, 2017 at 11:03:01AM +0900, Park Hoon wrote:
> Hi, I read the whole log messages and found that you have invalid
> interpreter-setting.json
> 
> Please check that `conf/interpreter-setting.json` is valid.

Thanks a lot!  That was it.  I removed conf/interpreter.json and started
Zeppelin, and it is working now.
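
For anyone hitting the same 503: a quick way to verify that a Zeppelin JSON
config file is well-formed before starting the server is to run it through a
JSON parser. A minimal sketch, assuming `python3` is on the PATH (the file
path is the one from this thread):

```shell
# Report whether a file parses as valid JSON (python3 assumed available)
check_json() {
  python3 -m json.tool "$1" > /dev/null 2>&1 && echo valid || echo invalid
}

check_json conf/interpreter-setting.json
```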

Cheers,

--
Suvayu

Open source is the future. It sets us free.


Re: Unable to use angular backend or front end API to print variables

2017-08-04 Thread Park Hoon
Hi, regarding the difference

- (backend)
https://zeppelin.apache.org/docs/latest/displaysystem/back-end-angular.html
- (front)
https://zeppelin.apache.org/docs/latest/displaysystem/front-end-angular.html

In short,

- the backend Angular API works from other interpreters (spark, and so on),
so you can call the `z.angular*` methods from an existing interpreter (if
it supports them)
- the frontend Angular API, on the other hand, is used through the
`%angular` interpreter directly in a paragraph
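
A minimal illustration of the two styles, written as Zeppelin notebook
paragraphs (a sketch only; it assumes a running Zeppelin notebook where `z`
is the interpreter context, so it cannot run standalone):

```scala
// Paragraph 1 (backend API, e.g. %spark): bind a value from the interpreter side
z.angularBind("name", "World")

// Paragraph 2 (still %spark): render it through the display system
println("%angular <div>Hello {{name}}</div>")

// Frontend API alternative: a paragraph that starts with %angular directly,
// with no backend interpreter involved:
//   %angular <div>Hello {{name}}</div>
```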

Regards,



On Sat, Jul 22, 2017 at 6:50 AM, Arun Natva  wrote:

>
> > Hi,
> > I am trying to use zeppelin angular Interpreter to build an interactive
> report.
> >
> > Installed zeppelin 0.6.0 on HDP 2.5.2. When I bind a value using
> z.angularBind(), the variable is not printing in the print statement.
> >
> > Can anyone please shed light on what I am missing ?
> >
> > Paragraph1:
> >
> > %spark
> >
> > z.angularBind("name", "Arun Natva")
> >
> >
> > Paragraph2:
> >
> > %spark
> >
> > println("%angular  Hello {{name}}")
> >
> > Above code prints "Hello" instead of "Hello Arun Natva"
> >
> > I tried to start with %angular but it was throwing some other connection
> error.
> >
> > I tried adding maven dependencies of angular plugin in the zeppelin
> configuration but no luck.
> >
> > org.apache.zeppelin:zeppelin-angular:0.6.0
> >
> >
> > Also, what is the difference between the Frontend vs Backend API for
> angular on Zeppelin ?
> >
> >
> > Appreciate your help !!
> >
> > Thanks!
> >
> > Sent from my iPhone
>


Re: Geo Map Charting

2017-08-04 Thread Park Hoon
Hi, if you are using a 0.8.0+ snapshot you can use custom visualizations from
the online registry.

- (map) https://github.com/volumeint/helium-volume-leaflet
- (heatmap) https://github.com/ZEPL/zeppelin-ultimate-heatmap-chart

Regards,

On Thu, Aug 3, 2017 at 10:12 AM, Jeff Zhang  wrote:

>
> Zeppelin supports bokeh, which supports geo charting. Here are some links
> that might be useful for you:
> https://community.hortonworks.com/articles/109837/use-bokeh-in-apache-zeppelin.html
> http://bokeh.pydata.org/en/latest/docs/user_guide/geo.html
>
>
>
> Benjamin Kim wrote on Thu, Aug 3, 2017 at 6:23 AM:
>
>> Has anyone ever tried to chart density clusters or heat maps onto a geo map
>> of the earth in Zeppelin? Can it be done?
>>
>> Cheers,
>> Ben
>>
>


Re: Cloudera Spark 2.2

2017-08-04 Thread Ruslan Dautkhanov
This should do:


> export SPARK_HOME=/opt/cloudera/parcels/SPARK2/lib/spark2
> export HIVE_HOME=/opt/cloudera/parcels/CDH/lib/hive
> export HADOOP_HOME=/opt/cloudera/parcels/CDH/lib/hadoop
> export HADOOP_CONF_DIR=/etc/hadoop/conf
> export HIVE_CONF_DIR=/etc/hive/conf
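
(As noted later in this thread, `spark-submit` and `spark-shell` are thin
shell-script wrappers that honor `SPARK_HOME`, which is why the exports above
are enough to select the Spark 2 parcel. A sketch of that resolution logic,
with a hypothetical default path, not the actual CDH script:)

```shell
# Sketch (assumption) of how such a wrapper picks the Spark installation:
# honor SPARK_HOME if set, otherwise fall back to a default parcel path.
resolve_spark_home() {
  echo "${SPARK_HOME:-/opt/cloudera/parcels/CDH/lib/spark}"
}

unset SPARK_HOME
resolve_spark_home    # default parcel (Spark 1.6 on stock CDH)

export SPARK_HOME=/opt/cloudera/parcels/SPARK2/lib/spark2
resolve_spark_home    # now resolves to the Spark 2 parcel
```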



>
> mvn clean package -DskipTests -Pspark-2.1 -Dhadoop.version=2.6.0-cdh5.10.1
> -Phadoop-2.6 -Pvendor-repo -Pscala-2.10 -Psparkr -pl
> '!alluxio,!flink,!ignite,!lens,!cassandra,!bigquery,!scio' -e


You may need additional steps depending on which interpreters you use (like R,
etc.).
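
One caveat on the command above: it builds against Scala 2.10, which Spark 2.2
no longer supports. A hedged sketch of the Spark 2.2 variant (the profile
names are assumptions; verify them against the pom.xml of the Zeppelin
version you are building):

```shell
# Hypothetical variant for Spark 2.2 / Scala 2.11 (profile names unverified):
mvn clean package -DskipTests -Pspark-2.2 -Dhadoop.version=2.6.0-cdh5.10.1 \
    -Phadoop-2.6 -Pvendor-repo -Pscala-2.11 -Psparkr \
    -pl '!alluxio,!flink,!ignite,!lens,!cassandra,!bigquery,!scio' -e
```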


-- 
Ruslan Dautkhanov

On Fri, Aug 4, 2017 at 8:31 AM, Benjamin Kim  wrote:

> Hi Ruslan,
>
> Can you send me the steps you used to build it, especially the Maven
> command with the arguments? I will try to build it also.
>
> I do believe that the binaries are for official releases.
>
> Cheers,
> Ben
>
>
> On Wed, Aug 2, 2017 at 3:44 PM Ruslan Dautkhanov 
> wrote:
>
>> It was built. I think binaries are only available for official releases?
>>
>>
>>
>> --
>> Ruslan Dautkhanov
>>
>> On Wed, Aug 2, 2017 at 4:41 PM, Benjamin Kim  wrote:
>>
>>> Did you build Zeppelin or download the binary?
>>>
>>> On Wed, Aug 2, 2017 at 3:40 PM Ruslan Dautkhanov 
>>> wrote:
>>>
 We're using an ~April snapshot of Zeppelin, so not sure about 0.7.1.

 Yes, we have that spark home in zeppelin-env.sh



 --
 Ruslan Dautkhanov

 On Wed, Aug 2, 2017 at 4:31 PM, Benjamin Kim 
 wrote:

> Does this work with Zeppelin 0.7.1? We get an error when setting
> SPARK_HOME in zeppelin-env.sh to what you have below.
>
> On Wed, Aug 2, 2017 at 3:24 PM Ruslan Dautkhanov 
> wrote:
>
>> You don't have to use spark2-shell and spark2-submit to use Spark 2.
>> That can be controlled by setting SPARK_HOME using regular
>> spark-submit/spark-shell.
>>
>> $ which spark-submit
>> /usr/bin/spark-submit
>> $ which spark-shell
>> /usr/bin/spark-shell
>>
>> $ spark-shell
>> Welcome to
>>       ____              __
>>      / __/__  ___ _____/ /__
>>     _\ \/ _ \/ _ `/ __/  '_/
>>    /___/ .__/\_,_/_/ /_/\_\   version 1.6.0
>>       /_/
>>
>>
>>
>> $ export SPARK_HOME=/opt/cloudera/parcels/SPARK2/lib/spark2
>>
>> $ spark-shell
>> Welcome to
>>       ____              __
>>      / __/__  ___ _____/ /__
>>     _\ \/ _ \/ _ `/ __/  '_/
>>    /___/ .__/\_,_/_/ /_/\_\   version 2.1.0.cloudera1
>>       /_/
>>
>>
>> spark-submit and spark-shell are just shell script wrappers.
>>
>>
>>
>> --
>> Ruslan Dautkhanov
>>
>> On Wed, Aug 2, 2017 at 10:22 AM, Benjamin Kim 
>> wrote:
>>
>>> According to the Zeppelin documentation, Zeppelin 0.7.1 supports
>>> Spark 2.1. But, I don't know if it supports Spark 2.2 or even 2.1 from
>>> Cloudera. For some reason, Cloudera defaults to Spark 1.6 and so does 
>>> the
>>> calls to spark-shell and spark-submit. To force the use of Spark 2.x, 
>>> the
>>> calls need to be spark2-shell and spark2-submit. I wonder if this is
>>> causing the problem. By the way, we are using Java8 corporate wide, and
>>> there seems to be no problems using Zeppelin.
>>>
>>> Cheers,
>>> Ben
>>>
>>> On Tue, Aug 1, 2017 at 7:05 PM Ruslan Dautkhanov <
>>> dautkha...@gmail.com> wrote:
>>>
 Might need to recompile Zeppelin with Scala 2.11?
 Also Spark 2.2 now requires JDK8 I believe.



 --
 Ruslan Dautkhanov

 On Tue, Aug 1, 2017 at 6:26 PM, Benjamin Kim 
 wrote:

> Here is more.
>
> org.apache.zeppelin.interpreter.InterpreterException: WARNING:
> User-defined SPARK_HOME (/opt/cloudera/parcels/SPARK2-2.2.0.cloudera1-1.cdh5.12.0.p0.142354/lib/spark2)
> overrides detected (/opt/cloudera/parcels/SPARK2/lib/spark2).
> WARNING: Running spark-class from user-defined location.
> Exception in thread "main" java.lang.NoSuchMethodError:
> scala.Predef$.$conforms()Lscala/Predef$$less$colon$less;
> at org.apache.spark.util.Utils$.getDefaultPropertiesFile(Utils.scala:2103)
> at org.apache.spark.deploy.SparkSubmitArguments$$anonfun$mergeDefaultSparkProperties$1.apply(SparkSubmitArguments.scala:124)
> at org.apache.spark.deploy.SparkSubmitArguments$$anonfun$mergeDefaultSparkProperties$1.apply(SparkSubmitArguments.scala:124)
> at scala.Option.getOrElse(Option.scala:120)
> at org.apache.spark.deploy.SparkSubmitArguments.mergeDefaultSparkProperties(SparkSubmitArguments.scala:124)
> at 

Re: Cloudera Spark 2.2

2017-08-04 Thread Benjamin Kim
Hi Ruslan,

Can you send me the steps you used to build it, especially the Maven
command with the arguments? I will try to build it also.

I do believe that the binaries are for official releases.

Cheers,
Ben


On Wed, Aug 2, 2017 at 3:44 PM Ruslan Dautkhanov 
wrote:

> It was built. I think binaries are only available for official releases?
>
>
>
> --
> Ruslan Dautkhanov
>
> On Wed, Aug 2, 2017 at 4:41 PM, Benjamin Kim  wrote:
>
>> Did you build Zeppelin or download the binary?
>>
>> On Wed, Aug 2, 2017 at 3:40 PM Ruslan Dautkhanov 
>> wrote:
>>
>>> We're using an ~April snapshot of Zeppelin, so not sure about 0.7.1.
>>>
>>> Yes, we have that spark home in zeppelin-env.sh
>>>
>>>
>>>
>>> --
>>> Ruslan Dautkhanov
>>>
>>> On Wed, Aug 2, 2017 at 4:31 PM, Benjamin Kim  wrote:
>>>
 Does this work with Zeppelin 0.7.1? We get an error when setting SPARK_HOME
 in zeppelin-env.sh to what you have below.

 On Wed, Aug 2, 2017 at 3:24 PM Ruslan Dautkhanov 
 wrote:

> You don't have to use spark2-shell and spark2-submit to use Spark 2.
> That can be controlled by setting SPARK_HOME using regular
> spark-submit/spark-shell.
>
> $ which spark-submit
> /usr/bin/spark-submit
> $ which spark-shell
> /usr/bin/spark-shell
>
> $ spark-shell
> Welcome to
>       ____              __
>      / __/__  ___ _____/ /__
>     _\ \/ _ \/ _ `/ __/  '_/
>    /___/ .__/\_,_/_/ /_/\_\   version 1.6.0
>       /_/
>
>
>
> $ export SPARK_HOME=/opt/cloudera/parcels/SPARK2/lib/spark2
>
> $ spark-shell
> Welcome to
>       ____              __
>      / __/__  ___ _____/ /__
>     _\ \/ _ \/ _ `/ __/  '_/
>    /___/ .__/\_,_/_/ /_/\_\   version 2.1.0.cloudera1
>       /_/
>
>
> spark-submit and spark-shell are just shell script wrappers.
>
>
>
> --
> Ruslan Dautkhanov
>
> On Wed, Aug 2, 2017 at 10:22 AM, Benjamin Kim 
> wrote:
>
>> According to the Zeppelin documentation, Zeppelin 0.7.1 supports
>> Spark 2.1. But, I don't know if it supports Spark 2.2 or even 2.1 from
>> Cloudera. For some reason, Cloudera defaults to Spark 1.6 and so does the
>> calls to spark-shell and spark-submit. To force the use of Spark 2.x, the
>> calls need to be spark2-shell and spark2-submit. I wonder if this is
>> causing the problem. By the way, we are using Java8 corporate wide, and
>> there seems to be no problems using Zeppelin.
>>
>> Cheers,
>> Ben
>>
>> On Tue, Aug 1, 2017 at 7:05 PM Ruslan Dautkhanov <
>> dautkha...@gmail.com> wrote:
>>
>>> Might need to recompile Zeppelin with Scala 2.11?
>>> Also Spark 2.2 now requires JDK8 I believe.
>>>
>>>
>>>
>>> --
>>> Ruslan Dautkhanov
>>>
>>> On Tue, Aug 1, 2017 at 6:26 PM, Benjamin Kim 
>>> wrote:
>>>
 Here is more.

 org.apache.zeppelin.interpreter.InterpreterException: WARNING:
 User-defined SPARK_HOME
 (/opt/cloudera/parcels/SPARK2-2.2.0.cloudera1-1.cdh5.12.0.p0.142354/lib/spark2)
 overrides detected (/opt/cloudera/parcels/SPARK2/lib/spark2).
 WARNING: Running spark-class from user-defined location.
 Exception in thread "main" java.lang.NoSuchMethodError:
 scala.Predef$.$conforms()Lscala/Predef$$less$colon$less;
 at
 org.apache.spark.util.Utils$.getDefaultPropertiesFile(Utils.scala:2103)
 at
 org.apache.spark.deploy.SparkSubmitArguments$$anonfun$mergeDefaultSparkProperties$1.apply(SparkSubmitArguments.scala:124)
 at
 org.apache.spark.deploy.SparkSubmitArguments$$anonfun$mergeDefaultSparkProperties$1.apply(SparkSubmitArguments.scala:124)
 at scala.Option.getOrElse(Option.scala:120)
 at
 org.apache.spark.deploy.SparkSubmitArguments.mergeDefaultSparkProperties(SparkSubmitArguments.scala:124)
 at
 org.apache.spark.deploy.SparkSubmitArguments.(SparkSubmitArguments.scala:110)
 at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:112)
 at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)

 Cheers,
 Ben


 On Tue, Aug 1, 2017 at 5:24 PM Jeff Zhang  wrote:

>
> Then it is due to some classpath issue. I am not too familiar
> with CDH; please check whether CDH's spark includes the hadoop jar with
> it.
>
>
> Benjamin Kim wrote on Wed, Aug 2, 2017 at 8:22 AM:
>
>> Here is the error that was sent to me.
>>
>> org.apache.zeppelin.interpreter.InterpreterException: Exception
>> in thread "main" java.lang.NoClassDefFoundError:
>>