Re: Small tips when running Zeppelin on EMR

2016-05-18 Thread Hyung Sung Shim
Thank you for sharing this great information!


2016-05-18 16:49 GMT+09:00 Ahyoung Ryu:

> Hi Kevin,
>
> Thanks for sharing. It's really helpful indeed, not only to me but also
> to many others.
> I think *6. Don't forget to terminate the cluster when you're done with
> your job* is the most important thing :)
> Is there any way I can see your slides? If so, it would be really
> appreciated.
>
> Best regards,
> Ahyoung
>
> On Wednesday, May 18, 2016 at 3:19 PM, Kevin (Sangwoo) Kim wrote:
>
>> Hi Zeppelin users,
>>
>> I presented a demo on "Spark + Zeppelin on AWS EMR" at the AWS Summit
>> Seoul yesterday. Unfortunately the slides are written in Korean, so
>> they're hard to share, but I'd like to share some essentials.
>>
>> 1. Running Zeppelin on EMR is super easy. (The EMR team did a really good
>> job. You can do it with only a few clicks; it took 8 minutes to launch.)
>>
>> 2. You can launch EMR with spot instances, which will save you money.
>>
>> 3. You can provide configs when you launch an EMR cluster, so if you want
>> to save your notebooks on S3, the proper config is as follows.
>>
>> [
>>   {
>> "Classification": "zeppelin-env",
>> "Properties": {},
>> "Configurations": [
>>   {
>> "Classification": "export",
>> "Properties": {
>>   "ZEPPELIN_NOTEBOOK_STORAGE":
>>  "org.apache.zeppelin.notebook.repo.S3NotebookRepo",
>>   "ZEPPELIN_NOTEBOOK_S3_BUCKET": "BUCKET_NAME",
>>   "ZEPPELIN_NOTEBOOK_S3_USER": "SOME_USER_NAME"
>> },
>> "Configurations": []
>>   }
>> ]
>>   }
>> ]
>>
>> 4. You need to set a proper spark.executor.memory in the Zeppelin
>> interpreter settings.
>>
>> 5. You can increase or decrease the cluster size on the cluster detail page.
>>
>> 6. Don't forget to terminate the cluster when you're done with your job :)
>>
>> That's all!
>>
>>
>> If you have more tips, please add them to this mail thread. Thanks!
>>
>> - Kevin
>>
>>
>>
>>
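For anyone scripting cluster launches, the S3 notebook storage classification from tip 3 can be built and sanity-checked programmatically before it is handed to EMR. Below is a minimal Python sketch; the function name and the bucket/user values are illustrative, not part of Kevin's setup:

```python
import json

def s3_notebook_config(bucket, user):
    """Build the zeppelin-env classification from tip 3 above."""
    return [
        {
            "Classification": "zeppelin-env",
            "Properties": {},
            "Configurations": [
                {
                    "Classification": "export",
                    "Properties": {
                        "ZEPPELIN_NOTEBOOK_STORAGE":
                            "org.apache.zeppelin.notebook.repo.S3NotebookRepo",
                        "ZEPPELIN_NOTEBOOK_S3_BUCKET": bucket,
                        "ZEPPELIN_NOTEBOOK_S3_USER": user,
                    },
                    "Configurations": [],
                }
            ],
        }
    ]

# json.dumps always emits straight ASCII quotes, which avoids the
# smart-quote corruption that can creep in when JSON is pasted from
# a rich-text editor or email client.
print(json.dumps(s3_notebook_config("BUCKET_NAME", "SOME_USER_NAME"), indent=2))
```

The output can be saved to a file and passed along with the other cluster options, e.g. via `aws emr create-cluster --configurations file://zeppelin-s3.json`.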


Re: method listing in Zeppellin

2016-05-17 Thread Hyung Sung Shim
If you are talking about code completion, you can press "Ctrl (or
Command)+Shift+Space".

2016-05-17 21:26 GMT+09:00 Snehotosh Banerjee:

> Hi,
>
> How do I get the method listing after a dot in the editor for Spark (both
> Scala and Python)?
>
> Regards
> Snehotosh
>
>
>


Re: Blank home page and disconnected

2016-05-16 Thread Hyung Sung Shim
Thank you very much for your detailed solution.
This will really help others.

On Tuesday, May 17, 2016, Sunita Koppar wrote:

> Thank you for your help in resolving this, Hyung. You are right, it was
> related to the firewall. For the benefit of others in the corporate world
> facing the same issue, here are the details.
> The initial firewall request opened http, https, and web browsing only. The
> firewall sees a websocket as a different connection type from these
> and blocks it. After the network engineer changed the connection type to
> "websocket" based on Hyung's analysis, Zeppelin was able to connect.
>
> regards
> Sunita
>
> On Sun, May 15, 2016 at 10:07 PM, Sunita Koppar <
> sunita.kop...@verizondigitalmedia.com> wrote:
>
>> Ok. Thanks a lot for quick response.
>>
>> On Thu, May 12, 2016 at 11:05 PM, Hyung Sung Shim wrote:
>>
>>> Yes, that means this is a networking issue.
>>> I think you should ask your network engineer about the problem.
>>>
>>> 2016-05-13 13:42 GMT+09:00 Sunita Koppar <
>>> sunita.kop...@verizondigitalmedia.com>:
>>>
>>>> This fails. It says
>>>>
>>>> DISCONNECTED
>>>>
>>>> ERROR: undefined
>>>> For both secured and unsecured.
>>>>
>>>> regards
>>>> Sunita
>>>>
>>>> On Thu, May 12, 2016 at 8:44 PM, Hyung Sung Shim wrote:
>>>>
>>>>> Could you try it like the following image?
>>>>>
>>>>> [image: inline image 2]
>>>>>
>>>>> 2016-05-13 2:24 GMT+09:00 Sunita Koppar <
>>>>> sunita.kop...@verizondigitalmedia.com>:
>>>>>
>>>>>> Assuming you wanted me to test this on my local machine (not the
>>>>>> clustered node running Zeppelin Server)
>>>>>>
>>>>>> Below is the output:
>>>>>>
>>>>>> regards
>>>>>> Sunita
>>>>>>
>>>>>> On Thu, May 12, 2016 at 9:46 AM, Hyung Sung Shim wrote:
>>>>>>
>>>>>>> I don't think pr868 is the reason. It seems your problem is the
>>>>>>> network issue.
>>>>>>> Can you do the websocket connection test on
>>>>>>> http://www.websocket.org/echo.html and share your result?
>>>>>>>
>>>>>>>
>>>>>>> On Friday, May 13, 2016, Sunita Koppar <
>>>>>>> sunita.kop...@verizondigitalmedia.com> wrote:
>>>>>>>
>>>>>>>> Appreciate your quick responses. Thanks a lot.
>>>>>>>>
>>>>>>>> I am trying with this exact combination. Its a CDH 5.7 cluster with
>>>>>>>> Spark 1.6
>>>>>>>>
>>>>>>>> https://github.com/apache/incubator-zeppelin/pull/868 could this
>>>>>>>> be the reason?
>>>>>>>>
>>>>>>>> Any suggestions on how to debug the issue. I dont see any errors
>>>>>>>> anywhere.
>>>>>>>>
>>>>>>>>
>>>>>>>> regards
>>>>>>>>
>>>>>>>> Sunita
>>>>>>>>
>>>>>>>> On Thu, May 12, 2016 at 9:00 AM, Hyung Sung Shim wrote:
>>>>>>>>
>>>>>>>>> Since 0.5.5 version, Zeppelin uses single port for REST
>>>>>>>>> and websocket so you don't need to care about 8092 port in your case.
>>>>>>>>>
>>>>>>>>> On Friday, May 13, 2016, Sunita Koppar <
>>>>>>>>> sunita.kop...@verizondigitalmedia.com> wrote:
>>>>>>>>>
>>>>>>>>> Thanks for the response.
>>>>>>>>>> Yes there is a firewall but that has been opened for access. The
>>>>>>>>>> zeppelin server runs on 8091 and that is open. From the comments, it 
>>>>>>>>>> seems
>>>>>>>>>> 8092 needs to be open as well. From firewall perspective 8092 is 
>>>>>>>>>> open as
>>>>>>>>>> well (we have requested for a 

Re: updated to cloudera CDH5.7.0 and zeppelin stopped working

2016-05-14 Thread Hyung Sung Shim
This issue is related to
https://github.com/apache/incubator-zeppelin/pull/868.

On Saturday, May 14, 2016, Preeti wrote:

> Hi,
>
> There was a recent server upgrade to CDH 5.7.0, and Zeppelin 0.5.6, which
> used to work, has stopped working. I get the following error:
>
> java.lang.NoSuchMethodException:
> org.apache.spark.repl.SparkILoop$SparkILoopInterpreter.classServerUri() at
> java.lang.Class.getMethod(Class.java:1665) at
> org.apache.zeppelin.spark.SparkInterpreter.createSparkContext(SparkInterpreter.java:271)
> at
> org.apache.zeppelin.spark.SparkInterpreter.getSparkContext(SparkInterpreter.java:145)
> at
> org.apache.zeppelin.spark.SparkInterpreter.open(SparkInterpreter.java:465)
> at
> org.apache.zeppelin.interpreter.ClassloaderInterpreter.open(ClassloaderInterpreter.java:74)
> at
> org.apache.zeppelin.interpreter.LazyOpenInterpreter.open(LazyOpenInterpreter.java:68)
> at
> org.apache.zeppelin.interpreter.LazyOpenInterpreter.interpret(LazyOpenInterpreter.java:92)
> at
> org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:300)
> at org.apache.zeppelin.scheduler.Job.run(Job.java:169) at
> org.apache.zeppelin.scheduler.FIFOScheduler$1.run(FIFOScheduler.java:134)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262) at
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:178)
> at
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:292)
> at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> ERROR
> Took 5 seconds
>
> Could you please help me? What am I missing?
>
> Thanks,
> Preethi
>
>


Re: Blank home page and disconnected

2016-05-12 Thread Hyung Sung Shim
Yes, that means this is a networking issue.
I think you should ask your network engineer about the problem.

2016-05-13 13:42 GMT+09:00 Sunita Koppar <
sunita.kop...@verizondigitalmedia.com>:

> This fails. It says
>
> DISCONNECTED
>
> ERROR: undefined
> For both secured and unsecured.
>
> regards
> Sunita
>
> On Thu, May 12, 2016 at 8:44 PM, Hyung Sung Shim wrote:
>
>> Could you try it like the following image?
>>
>> [image: inline image 2]
>>
>> 2016-05-13 2:24 GMT+09:00 Sunita Koppar <
>> sunita.kop...@verizondigitalmedia.com>:
>>
>>> Assuming you wanted me to test this on my local machine (not the
>>> clustered node running Zeppelin Server)
>>>
>>> Below is the output:
>>>
>>> regards
>>> Sunita
>>>
>>> On Thu, May 12, 2016 at 9:46 AM, Hyung Sung Shim 
>>> wrote:
>>>
>>>> I don't think pr868 is the reason. It seems your problem is the network
>>>> issue.
>>>> Can you do the websocket connection test on
>>>> http://www.websocket.org/echo.html and share your result?
>>>>
>>>>
>>>> On Friday, May 13, 2016, Sunita Koppar wrote:
>>>>
>>>>> Appreciate your quick responses. Thanks a lot.
>>>>>
>>>>> I am trying with this exact combination. Its a CDH 5.7 cluster with
>>>>> Spark 1.6
>>>>>
>>>>> https://github.com/apache/incubator-zeppelin/pull/868 could this be
>>>>> the reason?
>>>>>
>>>>> Any suggestions on how to debug the issue. I dont see any errors
>>>>> anywhere.
>>>>>
>>>>>
>>>>> regards
>>>>>
>>>>> Sunita
>>>>>
>>>>> On Thu, May 12, 2016 at 9:00 AM, Hyung Sung Shim 
>>>>> wrote:
>>>>>
>>>>>> Since 0.5.5 version, Zeppelin uses single port for REST
>>>>>> and websocket so you don't need to care about 8092 port in your case.
>>>>>>
>>>>>> On Friday, May 13, 2016, Sunita Koppar wrote:
>>>>>>
>>>>>> Thanks for the response.
>>>>>>> Yes there is a firewall but that has been opened for access. The
>>>>>>> zeppelin server runs on 8091 and that is open. From the comments, it 
>>>>>>> seems
>>>>>>> 8092 needs to be open as well. From firewall perspective 8092 is open as
>>>>>>> well (we have requested for a bracket of sockets to be opened which
>>>>>>> includes 8092), but I am not sure if zeppelin client opens the websocket
>>>>>>> over existing connection or as an independent socket connection from
>>>>>>> browser back to server. Pardon my ignorance on the subject.
>>>>>>>
>>>>>>> Here is the netstat output of the node running the server:
>>>>>>>
>>>>>>> $ netstat -na |grep 8092
>>>>>>> $ netstat -na |grep 8091
>>>>>>> tcp0  0 0.0.0.0:80910.0.0.0:*
>>>>>>> LISTEN
>>>>>>> tcp0  0 0.0.0.0:18091   0.0.0.0:*
>>>>>>> LISTEN
>>>>>>> tcp0  0 10.240.41.55:8091   192.229.234.2:20673
>>>>>>> FIN_WAIT2
>>>>>>> tcp0  0 10.240.41.55:8091   192.229.234.2:21277
>>>>>>> FIN_WAIT2
>>>>>>> tcp0  0 10.240.41.55:8091   192.229.234.2:42529
>>>>>>> ESTABLISHED
>>>>>>> tcp0  0 10.240.41.55:8091   192.229.234.2:27722
>>>>>>> FIN_WAIT2
>>>>>>> tcp0  0 10.240.41.55:8091   192.229.234.2:63170
>>>>>>> FIN_WAIT2
>>>>>>> tcp0  0 10.240.41.55:8091   192.229.234.2:8542
>>>>>>>  FIN_WAIT2
>>>>>>> tcp0  0 10.240.41.55:8091   192.229.234.2:7125
>>>>>>>  FIN_WAIT2
>>>>>>> tcp0  1 10.240.41.55:8091   192.229.234.2:16906
>>>>>>> FIN_WAIT1
>>>>>>>
>>>>>>>
>>>>>>> zeppelin-site.xml has:
>>>>>>>
>>>>>>> 
>>>>>>>   zeppelin.server.port
>>>>>>>   8091
>&

Re: Blank home page and disconnected

2016-05-12 Thread Hyung Sung Shim
Could you try it like the following image?

[image: inline image 2]

2016-05-13 2:24 GMT+09:00 Sunita Koppar <
sunita.kop...@verizondigitalmedia.com>:

> Assuming you wanted me to test this on my local machine (not the clustered
> node running Zeppelin Server)
>
> Below is the output:
>
> regards
> Sunita
>
> On Thu, May 12, 2016 at 9:46 AM, Hyung Sung Shim 
> wrote:
>
>> I don't think pr868 is the reason. It seems your problem is the network
>> issue.
>> Can you do the websocket connection test on
>> http://www.websocket.org/echo.html and share your result?
>>
>>
>> On Friday, May 13, 2016, Sunita Koppar wrote:
>>
>>> Appreciate your quick responses. Thanks a lot.
>>>
>>> I am trying with this exact combination. Its a CDH 5.7 cluster with
>>> Spark 1.6
>>>
>>> https://github.com/apache/incubator-zeppelin/pull/868 could this be the
>>> reason?
>>>
>>> Any suggestions on how to debug the issue. I dont see any errors
>>> anywhere.
>>>
>>>
>>> regards
>>>
>>> Sunita
>>>
>>> On Thu, May 12, 2016 at 9:00 AM, Hyung Sung Shim 
>>> wrote:
>>>
>>>> Since 0.5.5 version, Zeppelin uses single port for REST and websocket
>>>> so you don't need to care about 8092 port in your case.
>>>>
>>>> On Friday, May 13, 2016, Sunita Koppar wrote:
>>>>
>>>> Thanks for the response.
>>>>> Yes there is a firewall but that has been opened for access. The
>>>>> zeppelin server runs on 8091 and that is open. From the comments, it seems
>>>>> 8092 needs to be open as well. From firewall perspective 8092 is open as
>>>>> well (we have requested for a bracket of sockets to be opened which
>>>>> includes 8092), but I am not sure if zeppelin client opens the websocket
>>>>> over existing connection or as an independent socket connection from
>>>>> browser back to server. Pardon my ignorance on the subject.
>>>>>
>>>>> Here is the netstat output of the node running the server:
>>>>>
>>>>> $ netstat -na |grep 8092
>>>>> $ netstat -na |grep 8091
>>>>> tcp0  0 0.0.0.0:80910.0.0.0:*
>>>>> LISTEN
>>>>> tcp0  0 0.0.0.0:18091   0.0.0.0:*
>>>>> LISTEN
>>>>> tcp0  0 10.240.41.55:8091   192.229.234.2:20673
>>>>> FIN_WAIT2
>>>>> tcp0  0 10.240.41.55:8091   192.229.234.2:21277
>>>>> FIN_WAIT2
>>>>> tcp0  0 10.240.41.55:8091   192.229.234.2:42529
>>>>> ESTABLISHED
>>>>> tcp0  0 10.240.41.55:8091   192.229.234.2:27722
>>>>> FIN_WAIT2
>>>>> tcp0  0 10.240.41.55:8091   192.229.234.2:63170
>>>>> FIN_WAIT2
>>>>> tcp0  0 10.240.41.55:8091   192.229.234.2:8542
>>>>>  FIN_WAIT2
>>>>> tcp0  0 10.240.41.55:8091   192.229.234.2:7125
>>>>>  FIN_WAIT2
>>>>> tcp0  1 10.240.41.55:8091   192.229.234.2:16906
>>>>> FIN_WAIT1
>>>>>
>>>>>
>>>>> zeppelin-site.xml has:
>>>>>
>>>>> 
>>>>>   zeppelin.server.port
>>>>>   8091
>>>>>   Server port.
>>>>> 
>>>>>
>>>>>
>>>>> regards
>>>>> Sunita
>>>>>
>>>>> On Wed, May 11, 2016 at 8:00 PM, Hyung Sung Shim 
>>>>> wrote:
>>>>>
>>>>>> Hello.
>>>>>>
>>>>>> Zeppelin uses websocket connection. Do you have some firewall/proxy
>>>>>> between your browser and zeppelin server daemon? If that does not pass
>>>>>> websocket connection, start page may show 'Disconnected'.
>>>>>>
>>>>>>
>>>>>> 2016-05-12 4:09 GMT+09:00 Sunita Koppar <
>>>>>> sunita.kop...@verizondigitalmedia.com>:
>>>>>>
>>>>>>> Hello Experts,
>>>>>>>
>>>>>>> I am beginning to explore Apache Zeppelin and have set it up on one
>>>>>>> of our development cluster nodes. The hadoop version is Hadoop
>>>>>>> 2.6.0-cdh5.7.0, Spark - 1.6, maven 3.3.9 (

Re: Blank home page and disconnected

2016-05-12 Thread Hyung Sung Shim
I don't think pr868 is the reason. It seems your problem is a network
issue.
Can you do the websocket connection test on
http://www.websocket.org/echo.html and share your result?


On Friday, May 13, 2016, Sunita Koppar wrote:

> Appreciate your quick responses. Thanks a lot.
>
> I am trying with this exact combination. It's a CDH 5.7 cluster with Spark
> 1.6.
>
> https://github.com/apache/incubator-zeppelin/pull/868 could this be the
> reason?
>
> Any suggestions on how to debug the issue? I don't see any errors anywhere.
>
>
> regards
>
> Sunita
>
> On Thu, May 12, 2016 at 9:00 AM, Hyung Sung Shim 
> wrote:
>
>> Since 0.5.5 version, Zeppelin uses single port for REST and websocket so
>> you don't need to care about 8092 port in your case.
>>
>> On Friday, May 13, 2016, Sunita Koppar wrote:
>>
>> Thanks for the response.
>>> Yes there is a firewall but that has been opened for access. The
>>> zeppelin server runs on 8091 and that is open. From the comments, it seems
>>> 8092 needs to be open as well. From firewall perspective 8092 is open as
>>> well (we have requested for a bracket of sockets to be opened which
>>> includes 8092), but I am not sure if zeppelin client opens the websocket
>>> over existing connection or as an independent socket connection from
>>> browser back to server. Pardon my ignorance on the subject.
>>>
>>> Here is the netstat output of the node running the server:
>>>
>>> $ netstat -na |grep 8092
>>> $ netstat -na |grep 8091
>>> tcp0  0 0.0.0.0:80910.0.0.0:*
>>> LISTEN
>>> tcp0  0 0.0.0.0:18091   0.0.0.0:*
>>> LISTEN
>>> tcp0  0 10.240.41.55:8091   192.229.234.2:20673
>>> FIN_WAIT2
>>> tcp0  0 10.240.41.55:8091   192.229.234.2:21277
>>> FIN_WAIT2
>>> tcp0  0 10.240.41.55:8091   192.229.234.2:42529
>>> ESTABLISHED
>>> tcp0  0 10.240.41.55:8091   192.229.234.2:27722
>>> FIN_WAIT2
>>> tcp0  0 10.240.41.55:8091   192.229.234.2:63170
>>> FIN_WAIT2
>>> tcp0  0 10.240.41.55:8091   192.229.234.2:8542
>>>  FIN_WAIT2
>>> tcp0  0 10.240.41.55:8091   192.229.234.2:7125
>>>  FIN_WAIT2
>>> tcp0  1 10.240.41.55:8091   192.229.234.2:16906
>>> FIN_WAIT1
>>>
>>>
>>> zeppelin-site.xml has:
>>>
>>> 
>>>   zeppelin.server.port
>>>   8091
>>>   Server port.
>>> 
>>>
>>>
>>> regards
>>> Sunita
>>>
>>> On Wed, May 11, 2016 at 8:00 PM, Hyung Sung Shim 
>>> wrote:
>>>
>>>> Hello.
>>>>
>>>> Zeppelin uses websocket connection. Do you have some firewall/proxy
>>>> between your browser and zeppelin server daemon? If that does not pass
>>>> websocket connection, start page may show 'Disconnected'.
>>>>
>>>>
>>>> 2016-05-12 4:09 GMT+09:00 Sunita Koppar <
>>>> sunita.kop...@verizondigitalmedia.com>:
>>>>
>>>>> Hello Experts,
>>>>>
>>>>> I am beginning to explore Apache Zeppelin and have set it up on one of
>>>>> our development cluster nodes. The hadoop version is Hadoop 
>>>>> 2.6.0-cdh5.7.0,
>>>>> Spark - 1.6, maven 3.3.9 (Had an error while building one of the
>>>>> dependencies and it mentioned the version should be atleast 3.1.0)
>>>>>
>>>>> I have set the below properties in zeppelin-env.sh
>>>>>
>>>>>
>>>>> *export JAVA_HOME=/home/zeppelin/prerequisites/jdk1.7.0_79*
>>>>> *export MASTER=yarn-client   # Spark
>>>>> master url. eg. spark://master_addr:7077. Leave empty if you want to use
>>>>> local mode.*
>>>>> *export
>>>>> HADOOP_CONF_DIR=/etc/hadoop/conf.cloudera.yarn:/etc/hive/conf.cloudera.hive*
>>>>> *export
>>>>> HADOOP_HOME=/opt/cloudera/parcels/CDH-5.7.0-1.cdh5.7.0.p0.45/lib/hadoop*
>>>>> *export
>>>>> SPARK_HOME=/opt/cloudera/parcels/CDH-5.7.0-1.cdh5.7.0.p0.45/lib/spark*
>>>>>
>>>>> and built as root with below command:
>>>>>
>>>>> *mvn clean package -Pspark-1.6 -Dhadoop.version=2.6.0-cdh5.7.0
>>>>> -Phadoop-2.6 -Pyarn -DskipTests*
>>>>>

Re: Blank home page and disconnected

2016-05-12 Thread Hyung Sung Shim
Since version 0.5.5, Zeppelin uses a single port for REST and websocket, so
you don't need to care about port 8092 in your case.

On Friday, May 13, 2016, Sunita Koppar wrote:

> Thanks for the response.
> Yes, there is a firewall, but it has been opened for access. The zeppelin
> server runs on 8091, and that is open. From the comments, it seems 8092
> needs to be open as well. From the firewall perspective, 8092 is open as
> well (we have requested a range of ports to be opened, which includes
> 8092), but I am not sure whether the zeppelin client opens the websocket
> over the existing connection or as an independent socket connection from
> the browser back to the server. Pardon my ignorance on the subject.
>
> Here is the netstat output of the node running the server:
>
> $ netstat -na |grep 8092
> $ netstat -na |grep 8091
> tcp0  0 0.0.0.0:80910.0.0.0:*   LISTEN
> tcp0  0 0.0.0.0:18091   0.0.0.0:*   LISTEN
> tcp0  0 10.240.41.55:8091   192.229.234.2:20673
> FIN_WAIT2
> tcp0  0 10.240.41.55:8091   192.229.234.2:21277
> FIN_WAIT2
> tcp0  0 10.240.41.55:8091   192.229.234.2:42529
> ESTABLISHED
> tcp0  0 10.240.41.55:8091   192.229.234.2:27722
> FIN_WAIT2
> tcp0  0 10.240.41.55:8091   192.229.234.2:63170
> FIN_WAIT2
> tcp0  0 10.240.41.55:8091   192.229.234.2:8542
>  FIN_WAIT2
> tcp0  0 10.240.41.55:8091   192.229.234.2:7125
>  FIN_WAIT2
> tcp0  1 10.240.41.55:8091   192.229.234.2:16906
> FIN_WAIT1
>
>
> zeppelin-site.xml has:
>
> 
>   zeppelin.server.port
>   8091
>   Server port.
> 
>
>
> regards
> Sunita
>
> On Wed, May 11, 2016 at 8:00 PM, Hyung Sung Shim wrote:
>
>> Hello.
>>
>> Zeppelin uses websocket connection. Do you have some firewall/proxy
>> between your browser and zeppelin server daemon? If that does not pass
>> websocket connection, start page may show 'Disconnected'.
>>
>>
>> 2016-05-12 4:09 GMT+09:00 Sunita Koppar <
>> sunita.kop...@verizondigitalmedia.com>:
>>
>>> Hello Experts,
>>>
>>> I am beginning to explore Apache Zeppelin and have set it up on one of
>>> our development cluster nodes. The hadoop version is Hadoop 2.6.0-cdh5.7.0,
>>> Spark - 1.6, maven 3.3.9 (Had an error while building one of the
>>> dependencies and it mentioned the version should be atleast 3.1.0)
>>>
>>> I have set the below properties in zeppelin-env.sh
>>>
>>>
>>> *export JAVA_HOME=/home/zeppelin/prerequisites/jdk1.7.0_79*
>>> *export MASTER=yarn-client   # Spark master
>>> url. eg. spark://master_addr:7077. Leave empty if you want to use local
>>> mode.*
>>> *export
>>> HADOOP_CONF_DIR=/etc/hadoop/conf.cloudera.yarn:/etc/hive/conf.cloudera.hive*
>>> *export
>>> HADOOP_HOME=/opt/cloudera/parcels/CDH-5.7.0-1.cdh5.7.0.p0.45/lib/hadoop*
>>> *export
>>> SPARK_HOME=/opt/cloudera/parcels/CDH-5.7.0-1.cdh5.7.0.p0.45/lib/spark*
>>>
>>> and built as root with below command:
>>>
>>> *mvn clean package -Pspark-1.6 -Dhadoop.version=2.6.0-cdh5.7.0
>>> -Phadoop-2.6 -Pyarn -DskipTests*
>>>
>>> Since default port 8080 threw address in use exception (some cloudera
>>> services use it I guess), I changed this to 8091.
>>>
>>> ./bin/zeppelin-daemon.sh start -> works and the server is started,
>>> however, there is nothing displayed on the home screen and the status shows
>>> disconnected.
>>> There are no errors in the zeppelin-root-xxx.log. Only relevant entry
>>> could be (which I am not sure is an issue)
>>>
>>>
>>> *INFO [2016-05-11 18:53:05,910] ({main}
>>> StandardDescriptorProcessor.java[visitServlet]:297) - NO JSP Support for /,
>>> did not find org.eclipse.jetty.jsp.JettyJspServlet*
>>>
>>>
>>> The zeppelin-root-xxx.out file shows has the below entry as the latest:
>>>
>>> *May 11, 2016 6:53:08 PM
>>> com.sun.jersey.server.impl.application.WebApplicationImpl _initiate*
>>> *INFO: Initiating Jersey application, version 'Jersey: 1.13 06/29/2012
>>> 05:14 PM'*
>>> *May 11, 2016 6:53:09 PM com.sun.jersey.spi.inject.Errors
>>> processErrorMessages*
>>> *WARNING: The following warnings have been detected with resource and/or
>>> provider classes:*
>>> *  WARNING: A HTTP GET method, public javax.ws.rs.core.Response
>>> org.apache.zeppelin.rest.InterpreterRestApi.listInterpreter(java.lang.String),
>>> should not consume any entity.*
>>> *  WARNING: A sub-resource method, public javax.ws.rs.core.Response
>>> org.apache.zeppelin.rest.NotebookRestApi.createNote(java.lang.String)
>>> throws java.io.IOException, with URI template, "/", is treated as a
>>> resource method*
>>> *  WARNING: A sub-resource method, public javax.ws.rs.core.Response
>>> org.apache.zeppelin.rest.NotebookRestApi.getNotebookList() throws
>>> java.io.IOException, with URI template, "/", is treated as a resource
>>> method*
>>>
>>>
>>> Appreciate any help in this regard
>>>
>>> [image: Inline image 1]
>>>
>>> regards
>>> Sunita
>>>
>>>
>>
>


Re: Blank home page and disconnected

2016-05-11 Thread Hyung Sung Shim
Hello.

Zeppelin uses a websocket connection. Do you have a firewall or proxy
between your browser and the zeppelin server daemon? If it does not pass
websocket connections, the start page may show 'Disconnected'.
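To check from a client machine whether a firewall or proxy passes websocket upgrades (the failure mode described above), you can attempt a raw RFC 6455 handshake with only the standard library. This is a sketch: the host, port, and path are placeholders, and it uses a fixed dummy Sec-WebSocket-Key instead of a random nonce:

```python
import base64
import socket

def build_upgrade_request(host, port, path="/ws"):
    # Minimal HTTP/1.1 websocket upgrade request (RFC 6455 client handshake).
    key = base64.b64encode(b"0123456789abcdef").decode()  # dummy 16-byte nonce
    return (
        f"GET {path} HTTP/1.1\r\n"
        f"Host: {host}:{port}\r\n"
        "Upgrade: websocket\r\n"
        "Connection: Upgrade\r\n"
        f"Sec-WebSocket-Key: {key}\r\n"
        "Sec-WebSocket-Version: 13\r\n"
        "\r\n"
    ).encode()

def is_upgrade_accepted(status_line):
    # A server (or a proxy that passes websockets) answers "101 Switching
    # Protocols"; anything else means the upgrade was refused or rewritten.
    parts = status_line.split(" ", 2)
    return len(parts) >= 2 and parts[1] == "101"

def check_websocket(host, port, path="/ws", timeout=5.0):
    with socket.create_connection((host, port), timeout=timeout) as s:
        s.sendall(build_upgrade_request(host, port, path))
        status_line = s.makefile("rb").readline().decode("latin-1").rstrip()
    return is_upgrade_accepted(status_line)

# Example against the Zeppelin server from this thread (host/port illustrative):
# print(check_websocket("10.240.41.55", 8091))
```

If the handshake succeeds from the server itself but fails from a desktop behind the corporate firewall, that points at the firewall, as in Sunita's case.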


2016-05-12 4:09 GMT+09:00 Sunita Koppar <
sunita.kop...@verizondigitalmedia.com>:

> Hello Experts,
>
> I am beginning to explore Apache Zeppelin and have set it up on one of our
> development cluster nodes. The Hadoop version is 2.6.0-cdh5.7.0, Spark 1.6,
> and Maven 3.3.9. (I had an error while building one of the dependencies,
> and it mentioned the version should be at least 3.1.0.)
>
> I have set the below properties in zeppelin-env.sh
>
>
> *export JAVA_HOME=/home/zeppelin/prerequisites/jdk1.7.0_79*
> *export MASTER=yarn-client   # Spark master
> url. eg. spark://master_addr:7077. Leave empty if you want to use local
> mode.*
> *export
> HADOOP_CONF_DIR=/etc/hadoop/conf.cloudera.yarn:/etc/hive/conf.cloudera.hive*
> *export
> HADOOP_HOME=/opt/cloudera/parcels/CDH-5.7.0-1.cdh5.7.0.p0.45/lib/hadoop*
> *export
> SPARK_HOME=/opt/cloudera/parcels/CDH-5.7.0-1.cdh5.7.0.p0.45/lib/spark*
>
> and built as root with below command:
>
> *mvn clean package -Pspark-1.6 -Dhadoop.version=2.6.0-cdh5.7.0
> -Phadoop-2.6 -Pyarn -DskipTests*
>
> Since the default port 8080 threw an address-in-use exception (some Cloudera
> services use it, I guess), I changed it to 8091.
>
> ./bin/zeppelin-daemon.sh start works and the server is started; however,
> nothing is displayed on the home screen and the status shows disconnected.
> There are no errors in the zeppelin-root-xxx.log. The only relevant entry
> could be the following (which I am not sure is an issue):
>
>
> *INFO [2016-05-11 18:53:05,910] ({main}
> StandardDescriptorProcessor.java[visitServlet]:297) - NO JSP Support for /,
> did not find org.eclipse.jetty.jsp.JettyJspServlet*
>
>
> The zeppelin-root-xxx.out file has the below entry as the latest:
>
> *May 11, 2016 6:53:08 PM
> com.sun.jersey.server.impl.application.WebApplicationImpl _initiate*
> *INFO: Initiating Jersey application, version 'Jersey: 1.13 06/29/2012
> 05:14 PM'*
> *May 11, 2016 6:53:09 PM com.sun.jersey.spi.inject.Errors
> processErrorMessages*
> *WARNING: The following warnings have been detected with resource and/or
> provider classes:*
> *  WARNING: A HTTP GET method, public javax.ws.rs.core.Response
> org.apache.zeppelin.rest.InterpreterRestApi.listInterpreter(java.lang.String),
> should not consume any entity.*
> *  WARNING: A sub-resource method, public javax.ws.rs.core.Response
> org.apache.zeppelin.rest.NotebookRestApi.createNote(java.lang.String)
> throws java.io.IOException, with URI template, "/", is treated as a
> resource method*
> *  WARNING: A sub-resource method, public javax.ws.rs.core.Response
> org.apache.zeppelin.rest.NotebookRestApi.getNotebookList() throws
> java.io.IOException, with URI template, "/", is treated as a resource
> method*
>
>
> Appreciate any help in this regard
>
> [image: Inline image 1]
>
> regards
> Sunita
>
>


Re: Zeppelin Cannot Find Registered Table

2016-05-01 Thread Hyung Sung Shim
Hi.
Your code seems okay, and it registers the table without problems in my
environment.
It might be a problem with your backend.

How about executing the following code?

*val distData = sc.parallelize(List( ("cat",2), ("cat", 5), ("mouse",
4),("cat", 12), ("dog", 12), ("mouse", 2)), 2).toDF("type", "age")*
*distData.registerTempTable("table")*


2016-04-29 23:55 GMT+09:00 :

> I am not using any extra context; I'm not sure why it is not getting the table.
>
>  //import org.apache.spark.sql.SQLContext
>  import org.apache.spark.sql._
>  //import sqlContext.implicits._
>
>  //val sqlContext = new SQLContext(sc)
>
>  case class CEStats(statstype: String, bootstrap: String, threshold :
> String, totalTypeCount : String, ratio : String, testCount : String)
>
>  val rawData =
> sc.textFile("/user/cloudera/zepplin/mergeStatisticsData.txt").map(_.split(",")).map(p
> => CEStats(p(0), p(1), p(2), p(3), p(4), p(5))).toDF().cache()
>
>  rawData.printSchema()
>  rawData.registerTempTable("test")
>
>  // This works fine.
>  sqlContext.sql("SELECT COUNT(DISTINCT bootstrap) FROM test ").show()
>
>
>
>
>
> *%sql select count(distinct bootstrap) from test
> org.apache.spark.sql.AnalysisException: no such table test; line 1 pos 38
> at
> org.apache.spark.sql.catalyst.analysis.package$AnalysisErrorAt.failAnalysis(package.scala:42)
> at
> org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$.getTable(Analyzer.scala:260)
> at
> org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$$anonfun$apply$7.applyOrElse(Analyzer.scala:268)
> at
> org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$$anonfun$apply$7.applyOrElse(Analyzer.scala:264)
> at
> org.apache.spark.sql.catalyst.plans.logical.LogicalPlan$$anonfun$resolveOperators$1.apply(LogicalPlan.scala:57)
> at
> org.apache.spark.sql.catalyst.plans.logical.LogicalPlan$$anonfun$resolveOperators$1.apply(LogicalPlan.scala:57)
> at
> org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:51)
> at
> org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperators(LogicalPlan.scala:56)
> at
> org.apache.spark.sql.catalyst.plans.logical.LogicalPlan$$anonfun$1.apply(LogicalPlan.scala:54)
> at
> org.apache.spark.sql.catalyst.plans.logical.LogicalPlan$$anonfun$1.apply(LogicalPlan.scala:54)
> *
> --
> *From:* Srivastava, Rachana (IP&Science)
> *Sent:* Friday, April 29, 2016 6:05 AM
> *To:* users@zeppelin.incubator.apache.org
> *Subject:* Zeppelin Cannot Find Registered Table
>
> I have a three-line program where I am registering a table and calling a
> select * query. I know this is some issue with the SQLContext, but I have
> not used any SQLContext explicitly.
>
> *Following code works fine. I see my table registered:*
>
> case class CEStats(bootstrap: String, threshold : String, TP : String, FP
> : String, FN : String, TN : String, precision : String, recall : String)
>
> val cestats =
> sc.textFile("/user/cloudera/zepplin/mergeStatisticsTest1Combine.txt").map(_.split(",")).map(p
> => CEStats(p(0), p(1), p(2), p(3), p(4), p(5), p(6), p(7))).toDF()
>
> cestats.registerTempTable("table")
>
> sqlContext.tableNames().foreach(println)
>
> *But when I call %sql select * from table, I am getting this exception:*
>
> org.apache.spark.sql.AnalysisException: no such table table; line 0 pos 0
> at
> org.apache.spark.sql.catalyst.analysis.package$AnalysisErrorAt.failAnalysis(package.scala:42)
> at
> org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$.getTable(Analyzer.scala:260)
>


Re: Start Zeppelin automatically after Boot/Reboot

2016-04-28 Thread Hyung Sung Shim
Did you add your script to the run levels too?
Also, you should use absolute paths in your script, like
"/zeppelinhome/bin/zeppelin-daemon.sh start".

On Thursday, April 28, 2016, Panayotis Trapatsas wrote:

>
> How do I configure Zeppelin to start automatically after a boot/reboot?
>
> I have tried adding the command "bin/zeppelin-daemon.sh start" to an init
> script in /etc/init.d, but that didn't seem to work.
>
> I use Debian Jessie.
>
> --
>
> [image: e-Travel SA] 
>
> Panayotis Trapatsas / Lead Data Engineer
> p.trapat...@pamediakopes.gr
>  / +30
> 6979493921
>
> e-Travel SA
> Office: +30 213 0184000/ Fax: +30 211 8001181
> Leof. Kifisias 7, AmpelokipiAthens 115 23, Greece
> pamediakopes.gr |  trip.ru |  airtickets24.com |  fantasticgreece.com
>
> [image: Google +] [image:
> Stack Overflow] 
>
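Following Hyung's advice, a working /etc/init.d entry needs absolute paths and must be registered in the run levels. A template along these lines should work; the /opt/zeppelin install path is illustrative and must be adjusted:

```sh
#!/bin/sh
### BEGIN INIT INFO
# Provides:          zeppelin
# Required-Start:    $network $remote_fs
# Required-Stop:     $network $remote_fs
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: Apache Zeppelin notebook server
### END INIT INFO

# Absolute path to the Zeppelin installation (adjust to your system).
ZEPPELIN_HOME=/opt/zeppelin

case "$1" in
  start|stop|restart|status)
    # zeppelin-daemon.sh accepts the same verbs, so just forward them.
    "$ZEPPELIN_HOME/bin/zeppelin-daemon.sh" "$1"
    ;;
  *)
    echo "Usage: $0 {start|stop|restart|status}" >&2
    exit 1
    ;;
esac
```

After saving it as /etc/init.d/zeppelin and marking it executable, `update-rc.d zeppelin defaults` registers it in the default run levels on Debian.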


Re: Zeppelin hive + Kerberos

2016-04-27 Thread Hyung Sung Shim
If you are using the spark interpreter, you can find that information at [1].

[1]
https://zeppelin.incubator.apache.org/docs/0.6.0-incubating-SNAPSHOT/interpreter/spark.html

2016-04-27 17:09 GMT+09:00 Margus Roo :

> Hi
>
> Does Zeppelin (0.5.6) support Hive + Kerberos connection?
>
> --
> Margus (margusja) Roohttp://margus.roo.ee
> skype: margusja
> +372 51 48 780
>
>


Re: ZeppelinContext not found when SPARK_HOME is set.

2016-04-25 Thread Hyung Sung Shim
Hello.

If you want to use an external library, pass it with the '--jars' option
rather than '--driver-class-path' in SPARK_SUBMIT_OPTIONS.

Thanks.
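
For reference, a zeppelin-env.sh along those lines might look like the following (memory settings and jar path are taken from the question; adjust them to your install):

```shell
export SPARK_HOME="/home/username/libs/spark-1.6.1-bin-hadoop2.6/"
# Ship the MySQL JDBC driver with --jars instead of --driver-class-path,
# so the Zeppelin interpreter classpath (which provides ZeppelinContext,
# i.e. the 'z' variable) is not overridden.
export SPARK_SUBMIT_OPTIONS="--driver-memory 2G --executor-memory 4G \
  --jars /home/username/libs/mysql-spark/mysql-connector-java-5.1.38-bin.jar"
```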

2016-04-26 9:24 GMT+09:00 Ydalia Delgado :

> Hi,
>
> I have set in conf/zeppelin-env.sh
>
> export SPARK_HOME="/home/username/libs/spark-1.6.1-bin-hadoop2.6/"
> export SPARK_SUBMIT_OPTIONS="--driver-memory 2G --executor-memory 4G
> --driver-class-path
> /home/username/libs/mysql-spark/mysql-connector-java-5.1.38-bin.jar"
>
> I have tried everything from
> https://mail-archives.apache.org/mod_mbox/incubator-zeppelin-users/201509.mbox/%3cCALf24saa=TW_uapCKju4Z=j+C=ey6nnvrshvjdhsy+fgzto...@mail.gmail.com%3e
> but it doesn't work.
>
> I need to connect to a MySQL DB using JDBC.  The only way I get it working
> is setting SPARK_HOME and SPARK_SUBMIT_OPTIONS. But then I get the
> following error:
>
> val inputValue = z.input("value")
> :27: error: not found: value z
>
> Thank you a lot for your help!
> Ydalia
>


Re: Struggling with Spark Packages in Pyspark

2016-04-17 Thread Hyung Sung Shim
Hello.

I think you may have missed setting SPARK_HOME in zeppelin-env.sh; you can
refer to [1].

[1]
http://zeppelin.incubator.apache.org/docs/0.5.6-incubating/interpreter/spark.html

I hope this helps.
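
Putting the two settings together, a zeppelin-env.sh for this case might look like the following (the SPARK_HOME path is an assumption; point it at your actual Spark install):

```shell
# Zeppelin needs SPARK_HOME so it launches the interpreter via spark-submit,
# which is what resolves --packages from the repository.
export SPARK_HOME=/opt/spark
export SPARK_SUBMIT_OPTIONS="--packages com.databricks:spark-csv_2.10:1.2.0"
```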

Message sent on Sunday, April 17, 2016 by John Omernik:

> I am trying to use the databricks csv reader and have tried multiple ways
> to get this package available to pyspark. I have modified both
> spark-defaults.conf and zeppelin-env.sh (as stated below). I've included
> the spark-interpreter log from Zeppelin which seems to show it adding the
> jar properly.   Funny thing is, running pyspark at the command line works
> properly. I will say this, I am running Zeppelin (and thus Spark) in
> Docker, however, to ensure I did proper troubleshooting, I connected to the
> docker container (that was throwing the error in Zeppelin) and ran pyspark
> from within the container and it worked fine. The error only exists in
> Zeppelin.
>
> I would welcome any assistance.
>
> John
>
>
>
> *Error in Zeppelin:*
> Py4JJavaError: An error occurred while calling o82.load.
> : java.lang.ClassNotFoundException: Failed to find data source:
> com.databricks.spark.csv. Please find packages at
> http://spark-packages.org
> at
> org.apache.spark.sql.execution.datasources.ResolvedDataSource$.lookupDataSource(ResolvedDataSource.scala:77)
> at
> org.apache.spark.sql.execution.datasources.ResolvedDataSource$.apply(ResolvedDataSource.scala:102)
> at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:119)
> at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:109)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:231)
> at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:381)
> at py4j.Gateway.invoke(Gateway.java:259)
> at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:133)
> at py4j.commands.CallCommand.execute(CallCommand.java:79)
> at py4j.GatewayConnection.run(GatewayConnection.java:209)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.ClassNotFoundException:
> com.databricks.spark.csv.DefaultSource
> at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
> at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
> at java.security.AccessController.doPrivileged(Native Method)
> at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
> at
> org.apache.spark.sql.execution.datasources.ResolvedDataSource$$anonfun$4$$anonfun$apply$1.apply(ResolvedDataSource.scala:62)
> at
> org.apache.spark.sql.execution.datasources.ResolvedDataSource$$anonfun$4$$anonfun$apply$1.apply(ResolvedDataSource.scala:62)
> at scala.util.Try$.apply(Try.scala:161)
> at
> org.apache.spark.sql.execution.datasources.ResolvedDataSource$$anonfun$4.apply(ResolvedDataSource.scala:62)
> at
> org.apache.spark.sql.execution.datasources.ResolvedDataSource$$anonfun$4.apply(ResolvedDataSource.scala:62)
> at scala.util.Try.orElse(Try.scala:82)
> at
> org.apache.spark.sql.execution.datasources.ResolvedDataSource$.lookupDataSource(ResolvedDataSource.scala:62)
> ... 14 more
> (<class 'py4j.protocol.Py4JJavaError'>, Py4JJavaError(u'An error occurred
> while calling o82.load.\n', JavaObject id=o83), <traceback object at
> 0x7f3776b36320>)
>
>
> *zeppelin-env.sh*
>
> export SPARK_SUBMIT_OPTIONS="--packages
> com.databricks:spark-csv_2.10:1.2.0"
>
> *spark-defaults.conf*
>
> spark.jars.packages com.databricks:spark-csv_2.10:1.2.0
> *Command I am running:*
>
> df = sqlContext.read.format('com.databricks.spark.csv').option('header',
> 'true').option('inferschema', 'true').option('mode',
> 'DROPMALFORMED').load('/user/test/airline/2016_ONTIME.csv')
>
>
>
> *spark interpreter log:*
>
> INFO [2016-04-17 11:45:59,335] ({pool-2-thread-2}
> Logging.scala[logInfo]:58) - Added JAR
> file:/home/test/.ivy2/jars/com.databricks_spark-csv_2.10-1.2.0.jar at
> http://192.168.0.95:59483/jars/com.databricks_spark-csv_2.10-1.2.0.jar
> with timestamp 1460893559334
>
>  INFO [2016-04-17 11:45:59,335] ({pool-2-thread-2}
> Logging.scala[logInfo]:58) - Added JAR
> file:/home/test/.ivy2/jars/org.apache.commons_commons-csv-1.1.jar at
> http://192.168.0.95:59483/jars/org.apache.commons_commons-csv-1.1.jar
> with timestamp 1460893559335
>
>  INFO [2016-04-17 11:45:59,336] ({pool-2-thread-2}
> Logging.scala[logInfo]:58) - Added JAR
> file:/home/test/.ivy2/jars/com.univocity_univocity-parsers-1.5.1.jar at
> http://192.168.0.95:59483/jars/com.univocity_univocity-parsers-1.5.1.jar
> with timestamp 1460893559336
>
>  INFO [2016-04-17 11:45:59,348] ({pool-2-thread-2}
> Logging.scala[logInfo]:58) - Added JAR
> file:/zeppelin/interpreter/spark/zeppelin-spark-0.6.0-incubating-SNAPSHOT.j

Re: Zeppelin on EMR doesn't start?

2016-04-14 Thread Hyung Sung Shim
Thank YOU for sharing the results.

2016-04-14 15:42 GMT+09:00 Chris Miller :

> Indeed, one of the notebooks JSON files was corrupt... not sure how it
> happened, but it wasn't an important one so I just deleted it.
>
> Thanks!!
>
>
> --
> Chris Miller
>
> On Mon, Apr 11, 2016 at 9:06 PM, Hyung Sung Shim 
> wrote:
>
>> Hello.
>> It seems like one of your notebook json file has problem.
>> Could you check your notebook files are valid? and What zeppelin and EMR
>> version are you using?
>>
>> 2016-04-11 20:41 GMT+09:00 Chris Miller :
>>
>>> Hi. I'm running Zeppelin on the latest EMR distribution. Recently the
>>> Zeppelin service will not start. I'm not sure what's going on... here is
>>> what I see in the logs:
>>>
>>> 
>>> WARN [2016-04-11 11:36:29,512] ({main}
>>> ZeppelinConfiguration.java[create]:95) - Failed to load configuration,
>>> proceeding with a default
>>>  INFO [2016-04-11 11:36:29,865] ({main} ZeppelinServer.java[main]:101) -
>>> Start zeppelin server
>>>  INFO [2016-04-11 11:36:29,870] ({main} Server.java[doStart]:272) -
>>> jetty-8.1.14.v20131031
>>>  INFO [2016-04-11 11:36:30,073] ({main}
>>> InterpreterFactory.java[init]:113) - Reading
>>> /usr/lib/zeppelin/interpreter/spark
>>>  INFO [2016-04-11 11:36:30,139] ({main}
>>> InterpreterFactory.java[init]:130) - Interpreter spark.spark found.
>>> class=org.apache.zeppelin.spark.SparkInterpreter
>>>  INFO [2016-04-11 11:36:30,144] ({main}
>>> InterpreterFactory.java[init]:130) - Interpreter spark.pyspark found.
>>> class=org.apache.zeppelin.spark.PySparkInterpreter
>>>  INFO [2016-04-11 11:36:30,146] ({main}
>>> InterpreterFactory.java[init]:130) - Interpreter spark.sql found.
>>> class=org.apache.zeppelin.spark.SparkSqlInterpreter
>>>  INFO [2016-04-11 11:36:30,149] ({main}
>>> InterpreterFactory.java[init]:130) - Interpreter spark.dep found.
>>> class=org.apache.zeppelin.spark.DepInterpreter
>>>  INFO [2016-04-11 11:36:30,171] ({main}
>>> InterpreterFactory.java[init]:113) - Reading
>>> /usr/lib/zeppelin/interpreter/sh
>>>  INFO [2016-04-11 11:36:30,177] ({main}
>>> InterpreterFactory.java[init]:130) - Interpreter sh.sh found.
>>> class=org.apache.zeppelin.shell.ShellInterpreter
>>>  INFO [2016-04-11 11:36:30,182] ({main}
>>> InterpreterFactory.java[init]:113) - Reading
>>> /usr/lib/zeppelin/interpreter/lens
>>>  INFO [2016-04-11 11:36:30,253] ({main}
>>> InterpreterFactory.java[init]:130) - Interpreter lens.lens found.
>>> class=org.apache.zeppelin.lens.LensInterpreter
>>>  INFO [2016-04-11 11:36:30,256] ({main}
>>> InterpreterFactory.java[init]:113) - Reading
>>> /usr/lib/zeppelin/interpreter/psql
>>>  INFO [2016-04-11 11:36:30,270] ({main}
>>> InterpreterFactory.java[init]:130) - Interpreter psql.sql found.
>>> class=org.apache.zeppelin.postgresql.PostgreSqlInterpreter
>>>  INFO [2016-04-11 11:36:30,270] ({main}
>>> InterpreterFactory.java[init]:113) - Reading
>>> /usr/lib/zeppelin/interpreter/hive
>>>  INFO [2016-04-11 11:36:30,322] ({main}
>>> InterpreterFactory.java[init]:130) - Interpreter hive.hql found.
>>> class=org.apache.zeppelin.hive.HiveInterpreter
>>>  INFO [2016-04-11 11:36:30,329] ({main}
>>> InterpreterFactory.java[init]:113) - Reading
>>> /usr/lib/zeppelin/interpreter/kylin
>>>  INFO [2016-04-11 11:36:30,342] ({main}
>>> InterpreterFactory.java[init]:130) - Interpreter kylin.kylin found.
>>> class=org.apache.zeppelin.kylin.KylinInterpreter
>>>  INFO [2016-04-11 11:36:30,343] ({main}
>>> InterpreterFactory.java[init]:113) - Reading
>>> /usr/lib/zeppelin/interpreter/cassandra
>>>  INFO [2016-04-11 11:36:30,361] ({main}
>>> CassandraInterpreter.java[<init>]:154) - Bootstrapping Cassandra
>>> Interpreter
>>>  INFO [2016-04-11 11:36:30,362] ({main}
>>> InterpreterFactory.java[init]:130) - Interpreter cassandra.cassandra found.
>>> class=org.apache.zeppelin.cassandra.CassandraInterpreter
>>>  INFO [2016-04-11 11:36:30,363] ({main}
>>> InterpreterFactory.java[init]:113) - Reading
>>> /usr/lib/zeppelin/interpreter/tajo
>>>  INFO [2016-04-11 11:36:30,376] ({main}
>>> InterpreterFactory.java[init]:130) - Interpreter tajo.tql found.
>>> class=org.apache.zeppelin.tajo.TajoInterpreter
>>>  INFO [2016-04-11 11:36:30,380] ({main}
>>> InterpreterFactory.java[init]:1

Re: Zeppelin on EMR doesn't start?

2016-04-11 Thread Hyung Sung Shim
Hello.
It seems like one of your notebook JSON files has a problem.
Could you check whether your notebook files are valid? Also, which Zeppelin
and EMR versions are you using?
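
Not part of the original reply, but one quick way to spot a corrupt notebook is to try parsing every note.json under Zeppelin's notebook directory (the path is an assumption here; on EMR it is typically /usr/lib/zeppelin/notebook — adjust to your install):

```python
import json
import pathlib

def find_invalid_notebooks(notebook_dir):
    """Return the paths of note.json files that fail to parse as JSON."""
    bad = []
    # Each notebook lives in its own subdirectory containing a note.json.
    for path in sorted(pathlib.Path(notebook_dir).glob("*/note.json")):
        try:
            json.loads(path.read_text())
        except ValueError:
            bad.append(str(path))
    return bad

if __name__ == "__main__":
    for p in find_invalid_notebooks("/usr/lib/zeppelin/notebook"):
        print("invalid:", p)
```

Any file it reports can then be repaired or removed before restarting the Zeppelin daemon.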

2016-04-11 20:41 GMT+09:00 Chris Miller :

> Hi. I'm running Zeppelin on the latest EMR distribution. Recently the
> Zeppelin service will not start. I'm not sure what's going on... here is
> what I see in the logs:
>
> 
> WARN [2016-04-11 11:36:29,512] ({main}
> ZeppelinConfiguration.java[create]:95) - Failed to load configuration,
> proceeding with a default
>  INFO [2016-04-11 11:36:29,865] ({main} ZeppelinServer.java[main]:101) -
> Start zeppelin server
>  INFO [2016-04-11 11:36:29,870] ({main} Server.java[doStart]:272) -
> jetty-8.1.14.v20131031
>  INFO [2016-04-11 11:36:30,073] ({main} InterpreterFactory.java[init]:113)
> - Reading /usr/lib/zeppelin/interpreter/spark
>  INFO [2016-04-11 11:36:30,139] ({main} InterpreterFactory.java[init]:130)
> - Interpreter spark.spark found.
> class=org.apache.zeppelin.spark.SparkInterpreter
>  INFO [2016-04-11 11:36:30,144] ({main} InterpreterFactory.java[init]:130)
> - Interpreter spark.pyspark found.
> class=org.apache.zeppelin.spark.PySparkInterpreter
>  INFO [2016-04-11 11:36:30,146] ({main} InterpreterFactory.java[init]:130)
> - Interpreter spark.sql found.
> class=org.apache.zeppelin.spark.SparkSqlInterpreter
>  INFO [2016-04-11 11:36:30,149] ({main} InterpreterFactory.java[init]:130)
> - Interpreter spark.dep found.
> class=org.apache.zeppelin.spark.DepInterpreter
>  INFO [2016-04-11 11:36:30,171] ({main} InterpreterFactory.java[init]:113)
> - Reading /usr/lib/zeppelin/interpreter/sh
>  INFO [2016-04-11 11:36:30,177] ({main} InterpreterFactory.java[init]:130)
> - Interpreter sh.sh found. class=org.apache.zeppelin.shell.ShellInterpreter
>  INFO [2016-04-11 11:36:30,182] ({main} InterpreterFactory.java[init]:113)
> - Reading /usr/lib/zeppelin/interpreter/lens
>  INFO [2016-04-11 11:36:30,253] ({main} InterpreterFactory.java[init]:130)
> - Interpreter lens.lens found.
> class=org.apache.zeppelin.lens.LensInterpreter
>  INFO [2016-04-11 11:36:30,256] ({main} InterpreterFactory.java[init]:113)
> - Reading /usr/lib/zeppelin/interpreter/psql
>  INFO [2016-04-11 11:36:30,270] ({main} InterpreterFactory.java[init]:130)
> - Interpreter psql.sql found.
> class=org.apache.zeppelin.postgresql.PostgreSqlInterpreter
>  INFO [2016-04-11 11:36:30,270] ({main} InterpreterFactory.java[init]:113)
> - Reading /usr/lib/zeppelin/interpreter/hive
>  INFO [2016-04-11 11:36:30,322] ({main} InterpreterFactory.java[init]:130)
> - Interpreter hive.hql found. class=org.apache.zeppelin.hive.HiveInterpreter
>  INFO [2016-04-11 11:36:30,329] ({main} InterpreterFactory.java[init]:113)
> - Reading /usr/lib/zeppelin/interpreter/kylin
>  INFO [2016-04-11 11:36:30,342] ({main} InterpreterFactory.java[init]:130)
> - Interpreter kylin.kylin found.
> class=org.apache.zeppelin.kylin.KylinInterpreter
>  INFO [2016-04-11 11:36:30,343] ({main} InterpreterFactory.java[init]:113)
> - Reading /usr/lib/zeppelin/interpreter/cassandra
>  INFO [2016-04-11 11:36:30,361] ({main}
> CassandraInterpreter.java[]:154) - Bootstrapping Cassandra
> Interpreter
>  INFO [2016-04-11 11:36:30,362] ({main} InterpreterFactory.java[init]:130)
> - Interpreter cassandra.cassandra found.
> class=org.apache.zeppelin.cassandra.CassandraInterpreter
>  INFO [2016-04-11 11:36:30,363] ({main} InterpreterFactory.java[init]:113)
> - Reading /usr/lib/zeppelin/interpreter/tajo
>  INFO [2016-04-11 11:36:30,376] ({main} InterpreterFactory.java[init]:130)
> - Interpreter tajo.tql found. class=org.apache.zeppelin.tajo.TajoInterpreter
>  INFO [2016-04-11 11:36:30,380] ({main} InterpreterFactory.java[init]:113)
> - Reading /usr/lib/zeppelin/interpreter/md
>  INFO [2016-04-11 11:36:30,385] ({main} InterpreterFactory.java[init]:130)
> - Interpreter md.md found. class=org.apache.zeppelin.markdown.Markdown
>  INFO [2016-04-11 11:36:30,392] ({main} InterpreterFactory.java[init]:113)
> - Reading /usr/lib/zeppelin/interpreter/flink
>  INFO [2016-04-11 11:36:30,443] ({main} InterpreterFactory.java[init]:130)
> - Interpreter flink.flink found.
> class=org.apache.zeppelin.flink.FlinkInterpreter
>  INFO [2016-04-11 11:36:30,446] ({main} InterpreterFactory.java[init]:113)
> - Reading /usr/lib/zeppelin/interpreter/angular
>  INFO [2016-04-11 11:36:30,451] ({main} InterpreterFactory.java[init]:130)
> - Interpreter angular.angular found.
> class=org.apache.zeppelin.angular.AngularInterpreter
>  INFO [2016-04-11 11:36:30,456] ({main} InterpreterFactory.java[init]:113)
> - Reading /usr/lib/zeppelin/interpreter/ignite
>  INFO [2016-04-11 11:36:30,488] ({main} InterpreterFactory.java[init]:130)
> - Interpreter ignite.ignite found.
> class=org.apache.zeppelin.ignite.IgniteInterpreter
>  INFO [2016-04-11 11:36:30,490] ({main} InterpreterFactory.java[init]:130)
> - Interpreter ignite.ignitesql found.
> class=org.apache.zeppelin.ignite.IgniteSqlInterpreter
>  INFO [2016-04-11 11:36:30,49

Re: Cloudera-CDH Help

2016-04-08 Thread Hyung Sung Shim
Hello.
It seems like there are no resources left on YARN to run the Zeppelin
application. Could you check the jobs currently running on YARN?
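
One way to check (a sketch; run it on a node with the Hadoop client configured) is to list applications that are running or stuck in ACCEPTED state, which usually means the queue is out of capacity:

```shell
# Applications sitting in ACCEPTED are waiting for containers --
# a sign that running jobs are holding all of the queue's resources.
yarn application -list -appStates RUNNING,ACCEPTED
```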

Message sent on Saturday, April 9, 2016 by Skilton, Jeffrey:

> Works
>
> All the shell and local commands work. And there are no firewalls , all
> boxes are grouped on the same subnet, all ipchains off, not firewalls at
> all, all boxes on the same subnet.
>
>
>
> At the command line it all works.
>
>
>
> Jeff
>
>
>
>
>
> *From:* MUEDSAM, JOHAN [mailto:jm8...@att.com
> ]
> *Sent:* Friday, April 08, 2016 4:14 PM
> *To:* users@zeppelin.incubator.apache.org
> 
> *Subject:* Re: Cloudera-CDH Help
> *Importance:* High
>
>
>
> Does spark work outside of Zeppelin? I’ve seen those symptoms when a
> firewall blocks the executors from “calling home” to the driver, basically
> keeping the driver waiting for them to connect until it times out. Make
> sure the data nodes in your cluster can connect to the server where the
> driver resides.
>
>
>
> *From: *"Skilton, Jeffrey"
> *Reply-To: *"users@zeppelin.incubator.apache.org
> "
> *Date: *Friday, April 8, 2016 at 3:06 PM
> *To: *"users@zeppelin.incubator.apache.org
> "
> *Subject: *Cloudera-CDH Help
>
>
>
> We have a CDH stack, 5.6.0 and also use CASK/CDAP
>
>
>
> We have installed Zeppelin on a dedicated box and/or on one of our nodes in
> the CDH and installed all the CDH gateways so we have all the configs, and
> parcels loaded to our node.
>
>
>
> It works , built with maven, launched and created notebooks. (love the S3
> bucket).
>
> %hive works
>
> %hbase also seems to work
>
> %hdfs is now missing or maybe it is renamed or part of %file now
>
> %file we configured and pointed at the CDH node and port
> for the webfs and it looks like it should work, but HDFS simple ls, pwd,
> does NOT work.
>
>
>
> The part I need help with.
>
> %spark, and the python and anything using spark, it appears to submit the
> job to the CDH stack and we see it in resource manager as accepted but it
> just never ends. It just stays as accepted and nothing happens, if we kill
> it, it spins up again, until it hits the retry limit and fails.
>
>
>
> I have attached the interpreter screen and some logs.
>
>
>
> We configured the .sh file with
>
>
>
> export ZEPPELIN_PORT=8181
>
> export JAVA_HOME=/usr/java/jdk1.8.0_77/jre
>
> export MASTER=yarn-client
>
> export
> SPARK_HOME=/opt/cloudera/parcels/CDH-5.6.0-1.cdh5.6.0.p0.45/lib/spark
>
> export
> HADOOP_HOME=/opt/cloudera/parcels/CDH-5.6.0-1.cdh5.6.0.p0.45/lib/hadoop
>
> export HADOOP_CONF_DIR=/etc/hadoop/conf:/etc/hbase/conf:/etc/hive/conf
>
> export
> HBASE_HOME=/opt/cloudera/parcels/CDH-5.6.0-1.cdh5.6.0.p0.45/lib/hbase
>
> export PYSPARK_PYTHON=/usr/bin/python2.7
>
>
>
>
>
> INFO [2016-04-08 18:13:43,093] ({Thread-0}
> RemoteInterpreterServer.java[run]:84) - Starting remote interpreter server
> on port 49781
>
> INFO [2016-04-08 18:13:43,544] ({pool-1-thread-3}
> RemoteInterpreterServer.java[createInterpreter]:172) - Instantiate
> interpreter org.apache.zeppelin.spark.PySparkInterpreter
>
> INFO [2016-04-08 18:13:43,595] ({pool-1-thread-3}
> RemoteInterpreterServer.java[createInterpreter]:172) - Instantiate
> interpreter org.apache.zeppelin.spark.SparkInterpreter
>
> INFO [2016-04-08 18:13:43,599] ({pool-1-thread-3}
> RemoteInterpreterServer.java[createInterpreter]:172) - Instantiate
> interpreter org.apache.zeppelin.spark.SparkSqlInterpreter
>
> INFO [2016-04-08 18:13:43,601] ({pool-1-thread-3}
> RemoteInterpreterServer.java[createInterpreter]:172) - Instantiate
> interpreter org.apache.zeppelin.spark.DepInterpreter
>
> INFO [2016-04-08 18:13:43,631] ({pool-2-thread-2}
> SchedulerFactory.java[jobStarted]:131) - Job
> remoteInterpretJob_1460139223629 started by scheduler interpreter_896707314
>
> INFO [2016-04-08 18:13:43,634] ({pool-2-thread-2}
> PySparkInterpreter.java[createPythonScript]:115) - File
> /tmp/zeppelin_pyspark.py created
>
> INFO [2016-04-08 18:13:45,208] ({pool-2-thread-2}
> Logging.scala[logInfo]:59) - Changing view acls to: zeppelin
>
> INFO [2016-04-08 18:13:45,209] ({pool-2-thread-2}
> Logging.scala[logInfo]:59) - Changing modify acls to: zeppelin
>
> INFO [2016-04-08 18:13:45,210] ({pool-2-thread-2}
> Logging.scala[logInfo]:59) - SecurityManager: authentication disabled; ui
> acls disabled; users with view permissions: Set(zeppelin); users with
> modify permissions: Set(zeppelin)
>
> INFO [2016-04-08 18:13:45,568] ({pool-2-thread-2}
> Logging.scala[logInfo]:59) - Starting HTTP Server
>
> INFO [2016-04-08 18:13:45,697] ({pool-2-thread-2}
> Server.java[doStart]:272) - jetty-8.y.z-SNAPSHOT
>
> INFO [2016-04-08 18:13:45,715] ({pool-2-thread-2}
> AbstractConnector.java[doStart]:338) - Started SocketConnector@0.0.0.0
> :49230
>
> INFO [2016-04-08 18:13:45,716] ({pool-2-thread-2}
> Logging.scala[logInfo]:59) - Successfully started service 'HTTP class
> server' on port 49230.
>
> INFO [2016-04-08 18:13:49,338] ({pool-2-thread-2}
> Logging.scala[logInfo]:59) - Running Spark version 1.5.0-cdh5.6.0
>
> INFO [2

Re: Zeppelin 0.5.6 Disconnected

2016-04-04 Thread Hyung Sung Shim
Hello.
I just tested the same environment as yours and it worked.
All your configuration seems good except the
"zeppelin.server.allowed.origins" config.
Can you try to set the "zeppelin.server.allowed.origins" value to "*" like
the following?

# conf/zeppelin-site.xml
<property>
  <name>zeppelin.server.allowed.origins</name>
  <value>*</value>
  <description>Allowed sources for REST and WebSocket requests (i.e.
  http://onehost:8080,http://otherhost.com). If you leave * you are
  vulnerable to https://issues.apache.org/jira/browse/ZEPPELIN-173</description>
</property>

Please let me know the result.
Thanks.


2016-04-05 1:34 GMT+09:00 Hyung Sung Shim :

> Hello.
> I think the "Disconnected" problem is the websocket connection problem.
>
> The *"zeppelin.ssl.keystore.path"* should be under
> "~/$ZEPPELIN_HOME/conf/" path.
>
> If it's still not working, can you send the zeppelin log files that is
> under the "~/$ZEPPELIN_HOME/logs" ?
>
> Thanks.
>
>
> 2016-04-05 0:05 GMT+09:00 Oren Shani :
>
>> Hi,
>>
>>
>>
>> Nginx should be configured OK now, but still disconnected.
>>
>>
>>
>> Could it be something else? For example, did I have to install Spark
>> and/or Hadoop?
>>
>>
>>
>> Oren
>>
>>
>>
>> *From:* astros...@gmail.com [mailto:astros...@gmail.com] *On Behalf Of *Hyung
>> Sung Shim
>> *Sent:* Monday, April 4, 2016 5:17 PM
>>
>> *To:* users@zeppelin.incubator.apache.org
>> *Subject:* Re: Zeppelin 0.5.6 Disconnected
>>
>>
>>
>> hi.
>>
>> You could refer to https://github.com/apache/incubator-zeppelin/pull/814.
>>
>> I just tested on Ubuntu 14.04.3 and it's working well.
>>
>> Thanks.
>>
>>
>>
>>
>>
>> 2016-04-04 21:08 GMT+09:00 Oren Shani :
>>
>> Sung Shim,
>>
>>
>>
>>
>>
>> I wasn't sure what  http://backend and http://backendWS should refer to
>> so I defined both as upstream to localhost:8080
>>
>>
>>
>> Also when I defined the following:
>>
>>
>>
>> proxy_set_header X-Real-IP   $proxy_protocol_addr;
>>
>> proxy_set_header X-Forwarded-For $proxy_protocol_addr;
>>
>>
>>
>>
>>
>> Nginx complained that it doesn't know the variable proxy_protocol_addr so
>> I omitted these lines.
>>
>>
>>
>> So now I have the attached nginx config but it doesn't seem to help –
>> Zeppelin is still disconnected
>>
>>
>>
>> Thanks for your help,
>>
>>
>>
>> Oren
>>
>>
>>
>>
>>
>>
>>
>>
>>
>> *From:* astros...@gmail.com [mailto:astros...@gmail.com] *On Behalf Of *Hyung
>> Sung Shim
>> *Sent:* Monday, April 4, 2016 2:03 PM
>> *To:* users@zeppelin.incubator.apache.org
>> *Subject:* Re: Zeppelin 0.5.6 Disconnected
>>
>>
>>
>> Hello.
>>
>> Since 0.5.5, Zeppelin uses single port for REST and websocket.
>>
>> so Could you try to use same port like follow Nginx configuration?
>>
>>
>>
>> server {
>>
>>   listen 80 ;
>>
>>   server_name _;
>>
>>
>>
>>   location / {
>>
>> proxy_set_header Host $host;
>>
>> proxy_set_header X-Real-IP   $proxy_protocol_addr;
>>
>> proxy_set_header X-Forwarded-For $proxy_protocol_addr;
>>
>>
>>
>> proxy_pass  http://backend;
>>
>>   }
>>
>>
>>
>>   location /ws {
>>
>> proxy_http_version 1.1;
>>
>> proxy_set_header Upgrade $http_upgrade;
>>
>> proxy_set_header Connection "upgrade";
>>
>>
>>
>> proxy_set_header   X-Real-IP $remote_addr;
>>
>> proxy_pass  http://backendWS;
>>
>>   }
>>
>> }
>>
>>
>>
>> I am fixing the documentation.
>>
>> Thanks.
>>
>>
>>
>>
>>
>> 2016-04-04 17:24 GMT+09:00 Oren Shani :
>>
>> Hello,
>>
>>
>>
>> I installed Zeppelin 0.5.6 on Ubuntu server 14.04, behind Nginx. Zeppelin
>> comes up but shows "Disconnected" in the top right corner.
>>
>>
>>
>> I found several references to a similar problem on the web and most of
>> them  suggest that the problem has to do with access to Zeppelin's winsock
>> port (8081).
>>
>>
>>
>> The thing is that Zeppelin on my server does not listen on 8081 but only
>> on 8080. Also I ran pcap on both the server side and my PC's side and never
>> saw an attempt to access port 8081 .
>>
>>
>>
>> So could it be something else? Maybe I am missing something in my config
>> files?
>>
>>
>>
>> You can see my zeppelin-site.xml and the relevant lines from my nginx
>> config files attached.
>>
>>
>>
>> Many thanks,
>>
>>
>>
>> Oren
>>
>>
>>
>>
>>
>>
>>
>>
>>
>
>


Re: Zeppelin 0.5.6 Disconnected

2016-04-04 Thread Hyung Sung Shim
Hello.
I think the "Disconnected" problem is the websocket connection problem.

The *"zeppelin.ssl.keystore.path"* should point under the
"$ZEPPELIN_HOME/conf/" directory.

If it's still not working, can you send the Zeppelin log files that are
under "$ZEPPELIN_HOME/logs"?

Thanks.


2016-04-05 0:05 GMT+09:00 Oren Shani :

> Hi,
>
>
>
> Nginx should be configured OK now, but still disconnected.
>
>
>
> Could it be something else? For example, did I have to install Spark
> and/or Hadoop?
>
>
>
> Oren
>
>
>
> *From:* astros...@gmail.com [mailto:astros...@gmail.com] *On Behalf Of *Hyung
> Sung Shim
> *Sent:* Monday, April 4, 2016 5:17 PM
>
> *To:* users@zeppelin.incubator.apache.org
> *Subject:* Re: Zeppelin 0.5.6 Disconnected
>
>
>
> hi.
>
> You could refer to https://github.com/apache/incubator-zeppelin/pull/814.
>
> I just tested on Ubuntu 14.04.3 and it's working well.
>
> Thanks.
>
>
>
>
>
> 2016-04-04 21:08 GMT+09:00 Oren Shani :
>
> Sung Shim,
>
>
>
>
>
> I wasn't sure what  http://backend and http://backendWS should refer to
> so I defined both as upstream to localhost:8080
>
>
>
> Also when I defined the following:
>
>
>
> proxy_set_header X-Real-IP   $proxy_protocol_addr;
>
> proxy_set_header X-Forwarded-For $proxy_protocol_addr;
>
>
>
>
>
> Nginx complained that it doesn't know the variable proxy_protocol_addr so
> I omitted these lines.
>
>
>
> So now I have the attached nginx config but it doesn't seem to help –
> Zeppelin is still disconnected
>
>
>
> Thanks for your help,
>
>
>
> Oren
>
>
>
>
>
>
>
>
>
> *From:* astros...@gmail.com [mailto:astros...@gmail.com] *On Behalf Of *Hyung
> Sung Shim
> *Sent:* Monday, April 4, 2016 2:03 PM
> *To:* users@zeppelin.incubator.apache.org
> *Subject:* Re: Zeppelin 0.5.6 Disconnected
>
>
>
> Hello.
>
> Since 0.5.5, Zeppelin uses single port for REST and websocket.
>
> so Could you try to use same port like follow Nginx configuration?
>
>
>
> server {
>
>   listen 80 ;
>
>   server_name _;
>
>
>
>   location / {
>
> proxy_set_header Host $host;
>
> proxy_set_header X-Real-IP   $proxy_protocol_addr;
>
> proxy_set_header X-Forwarded-For $proxy_protocol_addr;
>
>
>
> proxy_pass  http://backend;
>
>   }
>
>
>
>   location /ws {
>
> proxy_http_version 1.1;
>
> proxy_set_header Upgrade $http_upgrade;
>
> proxy_set_header Connection "upgrade";
>
>
>
> proxy_set_header   X-Real-IP $remote_addr;
>
> proxy_pass  http://backendWS;
>
>   }
>
> }
>
>
>
> I am fixing the documentation.
>
> Thanks.
>
>
>
>
>
> 2016-04-04 17:24 GMT+09:00 Oren Shani :
>
> Hello,
>
>
>
> I installed Zeppelin 0.5.6 on Ubuntu server 14.04, behind Nginx. Zeppelin
> comes up but shows "Disconnected" in the top right corner.
>
>
>
> I found several references to a similar problem on the web and most of
> them  suggest that the problem has to do with access to Zeppelin's winsock
> port (8081).
>
>
>
> The thing is that Zeppelin on my server does not listen on 8081 but only
> on 8080. Also I ran pcap on both the server side and my PC's side and never
> saw an attempt to access port 8081 .
>
>
>
> So could it be something else? Maybe I am missing something in my config
> files?
>
>
>
> You can see my zeppelin-site.xml and the relevant lines from my nginx
> config files attached.
>
>
>
> Many thanks,
>
>
>
> Oren
>
>
>
>
>
>
>
>
>


Re: Zeppelin 0.5.6 Disconnected

2016-04-04 Thread Hyung Sung Shim
hi.
You could refer to https://github.com/apache/incubator-zeppelin/pull/814.
I just tested on Ubuntu 14.04.3 and it's working well.
Thanks.


2016-04-04 21:08 GMT+09:00 Oren Shani :

> Sung Shim,
>
>
>
>
>
> I wasn't sure what  http://backend and http://backendWS should refer to
> so I defined both as upstream to localhost:8080
>
>
>
> Also when I defined the following:
>
>
>
> proxy_set_header X-Real-IP   $proxy_protocol_addr;
>
> proxy_set_header X-Forwarded-For $proxy_protocol_addr;
>
>
>
>
>
> Nginx complained that it doesn't know the variable proxy_protocol_addr so
> I omitted these lines.
>
>
>
> So now I have the attached nginx config but it doesn't seem to help –
> Zeppelin is still disconnected
>
>
>
> Thanks for your help,
>
>
>
> Oren
>
>
>
>
>
>
>
>
>
> *From:* astros...@gmail.com [mailto:astros...@gmail.com] *On Behalf Of *Hyung
> Sung Shim
> *Sent:* Monday, April 4, 2016 2:03 PM
> *To:* users@zeppelin.incubator.apache.org
> *Subject:* Re: Zeppelin 0.5.6 Disconnected
>
>
>
> Hello.
>
> Since 0.5.5, Zeppelin uses single port for REST and websocket.
>
> so Could you try to use same port like follow Nginx configuration?
>
>
>
> server {
>
>   listen 80 ;
>
>   server_name _;
>
>
>
>   location / {
>
> proxy_set_header Host$host;
>
> proxy_set_header X-Real-IP   $proxy_protocol_addr;
>
> proxy_set_header X-Forwarded-For $proxy_protocol_addr;
>
>
>
> proxy_pass  http://backend;
>
>   }
>
>
>
>   location /ws {
>
> proxy_http_version 1.1;
>
> proxy_set_header Upgrade $http_upgrade;
>
> proxy_set_header Connection "upgrade";
>
>
>
> proxy_set_header   X-Real-IP $remote_addr;
>
> proxy_pass  http://backendWS;
>
>   }
>
> }
>
>
>
> I am fixing the documentation.
>
> Thanks.
>
>
>
>
>
> 2016-04-04 17:24 GMT+09:00 Oren Shani :
>
> Hello,
>
>
>
> I installed Zeppelin 0.5.6 on Ubuntu server 14.04, behind Nginx. Zeppelin
> comes up but shows "Disconnected" in the top right corner.
>
>
>
> I found several references to a similar problem on the web and most of
> them  suggest that the problem has to do with access to Zeppelin's winsock
> port (8081).
>
>
>
> The thing is that Zeppelin on my server does not listen on 8081 but only
> on 8080. Also I ran pcap on both the server side and my PC's side and never
> saw an attempt to access port 8081 .
>
>
>
> So could it be something else? Maybe I am missing something in my config
> files?
>
>
>
> You can see my zeppelin-site.xml and the relevant lines from my nginx
> config files attached.
>
>
>
> Many thanks,
>
>
>
> Oren
>
>
>
>
>
>
>


Re: Zeppelin 0.5.6 Disconnected

2016-04-04 Thread Hyung Sung Shim
Hello.
Since 0.5.5, Zeppelin has used a single port for both REST and WebSocket
traffic, so could you try using the same port, as in the following Nginx
configuration?

server {
  listen 80 ;
  server_name _;

  location / {
proxy_set_header Host $host;
proxy_set_header X-Real-IP   $proxy_protocol_addr;
proxy_set_header X-Forwarded-For $proxy_protocol_addr;

proxy_pass  http://backend;
  }

  location /ws {
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";

proxy_set_header   X-Real-IP $remote_addr;
proxy_pass  http://backendWS;
  }
}

I am fixing the documentation.
Thanks.
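
For completeness, a full configuration along these lines — with the upstream block that http://backend and http://backendWS refer to spelled out — might look like the sketch below. Zeppelin listening on port 8080 is an assumption; since REST and WebSocket share one port, both locations can point at the same upstream:

```nginx
upstream zeppelin {
    server localhost:8080;   # Zeppelin serves REST and WebSocket on one port
}

server {
    listen 80;
    server_name _;

    location / {
        proxy_set_header Host            $host;
        proxy_set_header X-Real-IP       $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://zeppelin;
    }

    location /ws {
        # WebSocket upgrade headers are required, or the notebook UI
        # shows "Disconnected".
        proxy_http_version 1.1;
        proxy_set_header Upgrade    $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header X-Real-IP  $remote_addr;
        proxy_pass http://zeppelin;
    }
}
```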


2016-04-04 17:24 GMT+09:00 Oren Shani :

> Hello,
>
>
>
> I installed Zeppelin 0.5.6 on Ubuntu server 14.04, behind Nginx. Zeppelin
> comes up but shows "Disconnected" in the top right corner.
>
>
>
> I found several references to a similar problem on the web and most of
> them  suggest that the problem has to do with access to Zeppelin's winsock
> port (8081).
>
>
>
> The thing is that Zeppelin on my server does not listen on 8081 but only
> on 8080. Also I ran pcap on both the server side and my PC's side and never
> saw an attempt to access port 8081.
>
>
>
> So could it be something else? Maybe I am missing something in my config
> files?
>
>
>
> You can see my zeppelin-site.xml and the relevant lines from my nginx
> config files attached.
>
>
>
> Many thanks,
>
>
>
> Oren
>
>
>
>
>


Re: Reverse Proxying Zeppelin 0.5.6 with NGINX

2016-03-24 Thread Hyung Sung Shim
Hello.

I think the documentation should be updated.
In the meantime, could you try the following Nginx configuration for the proxy?
Thanks.

server {
  listen 80;
  server_name _;

  location / {
    proxy_set_header Host            $host;
    proxy_set_header X-Real-IP       $proxy_protocol_addr;
    proxy_set_header X-Forwarded-For $proxy_protocol_addr;

    proxy_pass http://backend;
  }

  location /ws {
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";

    proxy_set_header X-Real-IP $remote_addr;
    proxy_pass http://backendWS;
  }
}



2016-03-24 22:20 GMT+09:00 Max Bridgewater :

> Jesan,
>
> Thanks for the link. I changed my configuration to match the one in that
> link. But still doesn't work. I am pasting my config below. The other thing
> that I find weird is that since 0.5.5, Zeppelin uses single port for REST
> and websocket  (see https://issues.apache.org/jira/browse/ZEPPELIN-172).
> Yet, the 0.6.0 security page you pointed at, uses two different ports; one
> for REST and one for Websocket.  So, I am wondering if the documentation is
> really up to date.
>
> Thanks,
> Max.
>
>
> upstream zeppelin {
> server localhost:8080;
> }
>
> upstream zeppelin-wss {
> server localhost:8081;
> }
>
> server {
> listen 8010;
> server_name localhost;
>
>
> location / {
> proxy_set_header X-Real-IP $remote_addr;
> proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
> proxy_set_header Host $http_host;
> proxy_set_header X-NginX-Proxy true;
> proxy_pass http://zeppelin;
> proxy_redirect off;
> }
> }
>
> server {
> listen 8011;
> server_name localhost;
>
> location / {
> proxy_pass http://zeppelin-wss;
> proxy_http_version 1.1;
> proxy_set_header Upgrade websocket;
> proxy_set_header Connection upgrade;
> proxy_read_timeout 86400;
> }
> }
>
>
>
> On Thu, Mar 24, 2016 at 8:00 AM, Jesang Yoon 
> wrote:
>
>> Max,
>>
>> You can check out NGINX related settings with Zeppelin at document here:
>>
>>
>> http://zeppelin.incubator.apache.org/docs/0.6.0-incubating-SNAPSHOT/security/authentication.html
>>
>>
>>
>> I hope this will help :)
>>
>>
>>
>>
>>
>> -Original Message-
>> *From:* "Max Bridgewater"
>> *To:* ;
>> *Cc:*
>> *Sent:* 2016-03-24 (목) 20:30:13
>> *Subject:* Reverse Proxying Zeppelin 0.5.6 with NGINX
>>
>> I am trying to reverse proxy Zeppelin 0.5.6 using NGINX. My config is
>> below. I get the to the Zeppelin UI but I am in disconnected mode. I cannot
>> create notes and I don't see the Zeppelin tutorial note.
>>
>> Any suggestion?
>>
>> Thanks,
>> Max
>>
>>
>> upstream backendWS {
>>   server localhost:8080;
>> }
>> server {
>> listen 80;
>> server_name localhost;
>> location / {
>> proxy_set_header Upgrade $http_upgrade;
>> proxy_set_header Connection "upgrade";
>> proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
>> proxy_set_header   X-Real-IP $remote_addr;
>> proxy_set_header   Host  $http_host;
>> proxy_pass  http://backendWS;
>> }
>> }
>>
>>
>>
>
>


Re: Zeppelin 0.5.6 build failed

2016-03-20 Thread Hyung Sung Shim
Hello.
It's good to hear that you solved the issue, and thanks for sharing your solution.


On Friday, March 18, 2016, Ruben Boada wrote:

> Hi,
>
> I've just solved this issue. The problem was the configuration of proxy.
> The process that I've followed is:
>
> - delete /zeppelin-web/node_modules/*
> - edit /zeppelin-web/.bowerrc to add proxy configs
> - edit /zeppelin-web/Gruntfile.js to add proxy configs
> - go to /zeppelin-web and npm install
> - mvn clean package
>
> Thanks for helping me with this issue; I was in trouble with it.
>
>
> On 17/03/16 at 17:26, Hyung Sung Shim wrote:
>
> Hello.
> My test environment is:
> - CentOS6.6
> - maven 3.3.3
> - JDK 1.7.0_67
> - node v0.10.42
> but I couldn't reproduce your problem.
> Did you build as a non-root user?
>
>
> 2016-03-17 21:36 GMT+09:00 Ruben Boada  >:
>
>> Hi,
>>
>> First I've tried this:
>>
>> mvn clean package -Pspark-1.6 -Phadoop-2.6
>>
>> Now I've added "-DskipTests" and fails in another point:
>>
>> [ERROR] npm WARN unmet dependency
>> /opt/incubator-zeppelin/zeppelin-web/node_modules/bower/node_modules/bower-config
>> requires mout@'>=0.9.0 <1.0.0' but will load
>>
>> [INFO] Running 'bower --allow-root install' in
>> /opt/incubator-zeppelin/zeppelin-web
>> [ERROR]
>> [ERROR] module.js:340
>> [ERROR] throw err;
>> [ERROR]   ^
>> [ERROR] Error: Cannot find module 'q'
>> [ERROR] at Function.Module._resolveFilename (module.js:338:15)
>> [ERROR] at Function.Module._load (module.js:280:25)
>> [ERROR] at Module.require (module.js:364:17)
>> [ERROR] at require (module.js:380:17)
>> [ERROR] at Object.
>> (/opt/incubator-zeppelin/zeppelin-web/node_modules/bower/bin/bower:6:9)
>> [ERROR] at Module._compile (module.js:456:26)
>> [ERROR] at Object.Module._extensions..js (module.js:474:10)
>> [ERROR] at Module.load (module.js:356:32)
>> [ERROR] at Function.Module._load (module.js:312:12)
>> [ERROR] at Function.Module.runMain (module.js:497:10)
>>
>>
>>
>> [ERROR] Failed to execute goal
>> com.github.eirslett:frontend-maven-plugin:0.0.25:bower (bower install) on
>> project zeppelin-web: Failed to run task: 'bower --allow-root install'
>> failed. (error code 8) -> [Help 1]
>>
>>
>>
>> On 17/03/16 at 13:11, Hyung Sung Shim wrote:
>>
>> Hello.
>> Could you share your build command too?
>>
>> On Thursday, March 17, 2016, Ruben Boada <ruben.bo...@csuc.cat> wrote:
>>
>>> Hi all,
>>>
>>> I'm trying to build Zeppelin 0.5.6 and the process fails in Zeppelin
>>> Interpreter. The error messages are:
>>>
>>>
>>> ERROR org.apache.zeppelin.scheduler.Job:184 - Job failed
>>> org.apache.zeppelin.interpreter.InterpreterException:
>>> java.lang.NumberFormatException: For input string: "non numeric value"
>>> at
>>> org.apache.zeppelin.interpreter.remote.mock.MockInterpreterA.interpret(MockInterpreterA.java:68)
>>> at
>>> org.apache.zeppelin.interpreter.ClassloaderInterpreter.interpret(ClassloaderInterpreter.java:57)
>>> at
>>> org.apache.zeppelin.interpreter.LazyOpenInterpreter.interpret(LazyOpenInterpreter.java:93)
>>> at
>>> org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:331)
>>> at org.apache.zeppelin.scheduler.Job.run(Job.java:171)
>>> at
>>> org.apache.zeppelin.scheduler.FIFOScheduler$1.run(FIFOScheduler.java:139)
>>> at
>>> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>>> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>>> at
>>> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:178)
>>> at
>>> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:292)
>>> at
>>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>>> at
>>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>>> at java.lang.Thread.run(Thread.java:745)
>>> Caused by: java.lang.NumberFormatException: For input string: "non
>>> numeric value"
>>> at
>>> java.lang.NumberFormatException.forInput

Re: Zeppelin 0.5.6 build failed

2016-03-19 Thread Hyung Sung Shim
Hello.
My test environment is:
- CentOS6.6
- maven 3.3.3
- JDK 1.7.0_67
- node v0.10.42
but I couldn't reproduce your problem.
Did you build as a non-root user?


2016-03-17 21:36 GMT+09:00 Ruben Boada :

> Hi,
>
> First I've tried this:
>
> mvn clean package -Pspark-1.6 -Phadoop-2.6
>
> Now I've added "-DskipTests" and fails in another point:
>
> [ERROR] npm WARN unmet dependency
> /opt/incubator-zeppelin/zeppelin-web/node_modules/bower/node_modules/bower-config
> requires mout@'>=0.9.0 <1.0.0' but will load
>
> [INFO] Running 'bower --allow-root install' in
> /opt/incubator-zeppelin/zeppelin-web
> [ERROR]
> [ERROR] module.js:340
> [ERROR] throw err;
> [ERROR]   ^
> [ERROR] Error: Cannot find module 'q'
> [ERROR] at Function.Module._resolveFilename (module.js:338:15)
> [ERROR] at Function.Module._load (module.js:280:25)
> [ERROR] at Module.require (module.js:364:17)
> [ERROR] at require (module.js:380:17)
> [ERROR] at Object.
> (/opt/incubator-zeppelin/zeppelin-web/node_modules/bower/bin/bower:6:9)
> [ERROR] at Module._compile (module.js:456:26)
> [ERROR] at Object.Module._extensions..js (module.js:474:10)
> [ERROR] at Module.load (module.js:356:32)
> [ERROR] at Function.Module._load (module.js:312:12)
> [ERROR] at Function.Module.runMain (module.js:497:10)
>
>
>
> [ERROR] Failed to execute goal
> com.github.eirslett:frontend-maven-plugin:0.0.25:bower (bower install) on
> project zeppelin-web: Failed to run task: 'bower --allow-root install'
> failed. (error code 8) -> [Help 1]
>
>
>
> On 17/03/16 at 13:11, Hyung Sung Shim wrote:
>
> Hello.
> Could you share your build command too?
>
> On Thursday, March 17, 2016, Ruben Boada wrote:
>
>> Hi all,
>>
>> I'm trying to build Zeppelin 0.5.6 and the process fails in Zeppelin
>> Interpreter. The error messages are:
>>
>>
>> ERROR org.apache.zeppelin.scheduler.Job:184 - Job failed
>> org.apache.zeppelin.interpreter.InterpreterException:
>> java.lang.NumberFormatException: For input string: "non numeric value"
>> at
>> org.apache.zeppelin.interpreter.remote.mock.MockInterpreterA.interpret(MockInterpreterA.java:68)
>> at
>> org.apache.zeppelin.interpreter.ClassloaderInterpreter.interpret(ClassloaderInterpreter.java:57)
>> at
>> org.apache.zeppelin.interpreter.LazyOpenInterpreter.interpret(LazyOpenInterpreter.java:93)
>> at
>> org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:331)
>> at org.apache.zeppelin.scheduler.Job.run(Job.java:171)
>> at
>> org.apache.zeppelin.scheduler.FIFOScheduler$1.run(FIFOScheduler.java:139)
>> at
>> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>> at
>> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:178)
>> at
>> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:292)
>> at
>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>> at
>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>> at java.lang.Thread.run(Thread.java:745)
>> Caused by: java.lang.NumberFormatException: For input string: "non
>> numeric value"
>> at
>> java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
>> at java.lang.Long.parseLong(Long.java:441)
>> at java.lang.Long.parseLong(Long.java:483)
>> at
>> org.apache.zeppelin.interpreter.remote.mock.MockInterpreterA.interpret(MockInterpreterA.java:65)
>> ... 12 more
>>
>>
>> ERROR
>> org.apache.zeppelin.interpreter.remote.RemoteInterpreterEventPoller:158 -
>> Can't handle event RemoteInterpreterEvent(type:OUTPUT_APPEND,
>> data:{"data":"","noteId":"note","paragraphId":"id"})
>>
>>
>> My environment is CentOS 6.6 with the versions of Maven, Java and Node.js
>> that software specifies:
>>
>> Apache Maven 3.3.3 (7994120775791599e205a5524ec3e0dfe41d4a06;
>> 2015-04-22T13:57:37+02:00)
>> Maven home: /usr/local/apache-maven-3.3.3
>> Java version: 1.7.0_79, vendor: Oracle Corporation
>> Java home: /usr/lib/jvm/jdk1.7.0_79/jre
>> Default locale: ca_ES, platform encoding: UTF-8
>> OS name: "linux", version: "2.6.32-504.el6.x86_64", arch: "amd64",
>> family: "unix"
>>
>> Node.js v0.10.36
>>
>>
>> Anyone can help me with this issue?
>>
>> Thanks in advance
>>
>
>


Re: Zeppelin 0.5.6 build failed

2016-03-19 Thread Hyung Sung Shim
Hello.
Could you share your build command too?

On Thursday, March 17, 2016, Ruben Boada wrote:

> Hi all,
>
> I'm trying to build Zeppelin 0.5.6 and the process fails in Zeppelin
> Interpreter. The error messages are:
>
>
> ERROR org.apache.zeppelin.scheduler.Job:184 - Job failed
> org.apache.zeppelin.interpreter.InterpreterException:
> java.lang.NumberFormatException: For input string: "non numeric value"
> at
> org.apache.zeppelin.interpreter.remote.mock.MockInterpreterA.interpret(MockInterpreterA.java:68)
> at
> org.apache.zeppelin.interpreter.ClassloaderInterpreter.interpret(ClassloaderInterpreter.java:57)
> at
> org.apache.zeppelin.interpreter.LazyOpenInterpreter.interpret(LazyOpenInterpreter.java:93)
> at
> org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:331)
> at org.apache.zeppelin.scheduler.Job.run(Job.java:171)
> at
> org.apache.zeppelin.scheduler.FIFOScheduler$1.run(FIFOScheduler.java:139)
> at
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:178)
> at
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:292)
> at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.NumberFormatException: For input string: "non numeric
> value"
> at
> java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
> at java.lang.Long.parseLong(Long.java:441)
> at java.lang.Long.parseLong(Long.java:483)
> at
> org.apache.zeppelin.interpreter.remote.mock.MockInterpreterA.interpret(MockInterpreterA.java:65)
> ... 12 more
>
>
> ERROR
> org.apache.zeppelin.interpreter.remote.RemoteInterpreterEventPoller:158 -
> Can't handle event RemoteInterpreterEvent(type:OUTPUT_APPEND,
> data:{"data":"","noteId":"note","paragraphId":"id"})
>
>
> My environment is CentOS 6.6 with the versions of Maven, Java and Node.js
> that software specifies:
>
> Apache Maven 3.3.3 (7994120775791599e205a5524ec3e0dfe41d4a06;
> 2015-04-22T13:57:37+02:00)
> Maven home: /usr/local/apache-maven-3.3.3
> Java version: 1.7.0_79, vendor: Oracle Corporation
> Java home: /usr/lib/jvm/jdk1.7.0_79/jre
> Default locale: ca_ES, platform encoding: UTF-8
> OS name: "linux", version: "2.6.32-504.el6.x86_64", arch: "amd64", family:
> "unix"
>
> Node.js v0.10.36
>
>
> Anyone can help me with this issue?
>
> Thanks in advance
>


Re: zeppelin multi user mode?

2016-02-03 Thread Hyung Sung Shim
Hello Yunfeng.

You can also refer to
https://github.com/NFLabs/z-manager/tree/master/multitenancy.

Thanks.
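Christopher's per-user-config suggestion quoted below requires a distinct `zeppelin-site.xml` port for each user. A hypothetical helper (not part of Zeppelin or z-manager) that derives a stable port from the username, for scripting the per-user config setup:

```python
import zlib

def user_port(username: str, base_port: int = 8180, pool_size: int = 100) -> int:
    """Map a username to a stable port in [base_port, base_port + pool_size),
    suitable for writing into that user's ~/.zeppelin/zeppelin-site.xml.
    crc32 is deterministic across runs, unlike Python's salted hash()."""
    return base_port + zlib.crc32(username.encode("utf-8")) % pool_size

print(user_port("alice"))
```

With many users, two names can collide on the same port, so in practice one would record the assignments (or probe for a free port) rather than rely on the hash alone.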

2016-02-04 3:56 GMT+09:00 Christopher Matta :

> I have had luck with a single Zeppelin installation and config directories
> in each user home directory. That way each user gets their own instance and
> will not interfere with each other.
>
> You can start the Zeppelin server with a config flag pointing to the config
> directory. Simply copy the config dir that comes with Zeppelin to
> ~/.zeppelin and edit the zeppelin-site.xml to change the default port for each
> user. Start like this:
> ./zeppelin.sh --config ~/.zeppelin start
>
>
> On Wednesday, February 3, 2016, Lin, Yunfeng  wrote:
>
>> Hi guys,
>>
>>
>>
>> We are planning to use zeppelin for PROD for data scientists. One feature
>> we desperately need is multi user mode.
>>
>>
>>
>> Currently, zeppelin is great for single user use. However, since zeppelin
>> spark context are shared among all users in one zeppelin server, it is not
>> very suitable when there are multiple users on the same zeppelin server
>> since they are going to interfere with each other in one spark context.
>>
>>
>>
>> How do you guys address this need? Thanks.
>>
>>
>>
>
>
> --
> Chris Matta
> cma...@mapr.com
> 215-701-3146
>
>


Re: increasing Spark driver memory using Zeppelin with Spark/YARN

2016-02-03 Thread Hyung Sung Shim
Hello.

I've commented on your questions below.

1. is the Spark driver executing in the same JVM as the Zeppelin
RemoteInterpreterServer?
*-> As far as I know, they are separate. The interpreter server process is
launched by ~/bin/interpreter.sh.*

2. how does one correctly size the Spark driver memory (Xmx) in this
setting?
*-> You can refer to
http://apache-zeppelin-users-incubating-mailing-list.75479.x6.nabble.com/Can-not-configure-driver-memory-size-td1513.html*

3. as an alternative: is yarn-master supported with Zeppelin?
*-> As far as I know, Zeppelin supports only yarn-client mode.*

If I'm wrong, please correct me.
Thanks.
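To expand on point 2: in yarn-client mode the driver heap lives in the interpreter JVM, so it is usually sized in conf/zeppelin-env.sh rather than via spark.driver.memory alone. A hedged sketch that only assembles the export lines; the exact variable names (e.g. ZEPPELIN_INTP_MEM) vary by Zeppelin version, so verify them against your release's zeppelin-env.sh.template:

```python
def zeppelin_env_exports(driver_mem: str = "6g", executor_mem: str = "6g") -> str:
    """Build export lines to append to conf/zeppelin-env.sh:
    ZEPPELIN_INTP_MEM sizes the interpreter JVM (where the yarn-client
    driver runs); SPARK_SUBMIT_OPTIONS forwards executor memory to
    spark-submit."""
    return "\n".join([
        f'export ZEPPELIN_INTP_MEM="-Xmx{driver_mem}"',
        f'export SPARK_SUBMIT_OPTIONS="--conf spark.executor.memory={executor_mem}"',
    ])

print(zeppelin_env_exports("6g", "6g"))
```

A Zeppelin restart is needed after editing zeppelin-env.sh for the new interpreter heap to take effect.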

2016-02-03 20:31 GMT+09:00 Gerald Loeffler :

> dear Zeppelin users,
>
> we’ve been using Zeppelin (0.5.5) with Spark/YARN successfully for quite
> some time but now
> we need a Spark driver with lots of memory and fail to achieve that using
> Zeppelin (it works without issues using direct spark-submit). Currently
> we’re using yarn-client mode.
>
>1. is the Spark driver executing in the same JVM as the Zeppelin
>RemoteInterpreterServer?
>2. how does one correctly size the Spark driver memory (Xmx) in this
>setting?
>3.
>
>as an alternative: is yarn-master supported with Zeppelin?
>
>
>thank you very much in advance for your help!,
>gerald
>
>
> --
> Gerald Loeffler
> mailto:gerald.loeff...@googlemail.com
> http://www.gerald-loeffler.net
>


Re: Hadoop 2.5 support

2016-02-03 Thread Hyung Sung Shim
Hello.
When I tested it, building with "-Phadoop-2.4 -Dhadoop.version=2.5.1" worked.
If we need a hadoop-2.5 profile, I will add one.
Thanks.

2016-02-03 18:35 GMT+09:00 Akmal Abbasov :

> Hi,
> I'm trying to build Apache Zeppelin, and followed instructions in github.
> But I don't found -Phadoop-2.5 profile.
> I'm running Hadoop 2.5.1.
> Can I use
>
> -Phadoop-2.4 -Dhadoop.version=2.5.1
>
> build options?
>
>
> Thank you.
>


Re: Zeppelin on EMR : how to set Driver and Executor memory

2016-01-30 Thread Hyung Sung Shim
Hello.
The following thread may help you.
http://apache-zeppelin-users-incubating-mailing-list.75479.x6.nabble.com/Can-not-configure-driver-memory-size-td1513.html


2016-01-30 22:45 GMT+09:00 shahab :

> Hi,
>
> I am running Zeppelin on Amazon EMR Spark and I keep facing an "out of
> memory" problem while loading a large csv file.
> The zeppelin by default has set 512 MB for driver executor and 142 MB for
> two executors.
> I tried to increase them by placing the following configuration params in
> "zeppelin-env.sh", but it had no effect.
>
> --conf driver-memory=6g --conf spark.executor.memory=6g
>
> I do appreciate if you could share your comments and experience on how to
> fix this.
>
> best,
> /Shahab
>
>


Re: Providing third party jar files to spark

2016-01-25 Thread Hyung Sung Shim
Hello.
The loadAndDist() method doesn't work for me either. Maybe it's deprecated.
I'll check and fix the documentation.

Using spark-shell, you can run your application with the following steps:
1. Remove the jar configuration from spark-defaults.conf.
2. From the Spark home directory, run: bin/spark-shell --jars "YOUR_JARS,COMMA,SEPARATED"

If you share your application code and environment information (the Zeppelin
and Spark versions you're using, your zeppelin-env.sh, etc.), I may be able to
help.

Thanks.
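The two steps amount to moving the jar list out of spark-defaults.conf and onto the spark-shell command line. A small illustrative helper (names are my own) that assembles the invocation, showing the comma-separated, no-spaces format --jars expects:

```python
def spark_shell_command(spark_home: str, jars: list) -> str:
    """Build the spark-shell invocation from step 2: --jars takes a
    comma-separated list with no spaces between entries."""
    return f"{spark_home}/bin/spark-shell --jars {','.join(jars)}"

cmd = spark_shell_command("/opt/spark", ["a.jar", "b.jar"])
print(cmd)  # /opt/spark/bin/spark-shell --jars a.jar,b.jar
```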




2016-01-25 19:08 GMT+09:00 Balachandar R.A. :

> Hello,
>
> I tried to use z.loadAndDist() but it says
>
> <console>:17: error: value loadAndDist is not a member of
> org.apache.zeppelin.spark.dep.DependencyContext
>
> Any idea here what this method is for?
>
>
> regards
> Bala
>
> On 25 January 2016 at 15:34, Balachandar R.A. 
> wrote:
>
>> Hello,
>>
>> I have run the code in spark-shell successfully but the jar files were
>> all specified in the config files (spark-defaults.conf). However, I will
>> not be able to use z.load() in spark-shell. Isn't? I am sorry but I did not
>> pick up the idea of running using spark-shell. Wail's suggestion is to create
>> a fatJar? I will give it a try, but still, how do I make sure this fatJar is
>> accessible to spark executors? ANyway, I will keep you posted on this
>>
>> regards
>> Bala
>>
>> On 25 January 2016 at 13:39, Hyung Sung Shim  wrote:
>>
>>> Hello.
>>> I think Wail Alkowaileet's comment is possible.
>>> Balachandar, Could you try to run your application with spark-shell?
>>>
>>>
>>> 2016-01-25 15:45 GMT+09:00 Wail Alkowaileet :
>>>
>>>> I used z.load in my case and it seems to be working just fine.
>>>> Can you try spark-shell with your jar file? and see what is the error?
>>>>
>>>> I assume the problem that your application requires third-party jars.
>>>> Therefore, you need to build your app with 'assembly'.
>>>>
>>>>
>>>> On Mon, Jan 25, 2016 at 9:39 AM, Balachandar R.A. <
>>>> balachandar...@gmail.com> wrote:
>>>>
>>>>> Hello Hyung,
>>>>>
>>>>> There is nothig I could make out from error log as it is plain
>>>>> straightforward that classNotFoundException
>>>>>
>>>>> On 25 January 2016 at 11:34, Hyung Sung Shim 
>>>>> wrote:
>>>>>
>>>>>> It's weird..so Could you send the error log for details?
>>>>>>
>>>>>> 2016-01-25 15:00 GMT+09:00 Balachandar R.A. >>>>> >:
>>>>>>
>>>>>>> Hi Hyung,
>>>>>>>
>>>>>>> Thanks for the response. This I have tried but did not work.
>>>>>>>
>>>>>>> regards
>>>>>>> Bala
>>>>>>>
>>>>>>> On 25 January 2016 at 11:27, Hyung Sung Shim 
>>>>>>> wrote:
>>>>>>>
>>>>>>>> Hello. Balachandar.
>>>>>>>> In case of third one that you've tried, It must be first executed
>>>>>>>> in the notebook.
>>>>>>>> Could you try restart the zeppelin and run first the "%dep
>>>>>>>> z.load()" paragraph?
>>>>>>>>
>>>>>>>>
>>>>>>>> 2016-01-25 14:39 GMT+09:00 Balachandar R.A. <
>>>>>>>> balachandar...@gmail.com>:
>>>>>>>>
>>>>>>>>> Hi
>>>>>>>>>
>>>>>>>>> Any help would be greatly appreciated :-)
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> -- Forwarded message --
>>>>>>>>> From: Balachandar R.A. 
>>>>>>>>> Date: 21 January 2016 at 14:11
>>>>>>>>> Subject: Providing third party jar files to spark
>>>>>>>>> To: users@zeppelin.incubator.apache.org
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> Hello
>>>>>>>>>
>>>>>>>>> My spark based map tasks needs to access third party jar files. I
>>>>>>>>> found below options to submit third party jar files to spark 
>>>>>>>>> interpreter
>>>>>>>>>
>>>>>>>>> 1. export SPARK_SUBMIT_OPTIONS=>>>>>>>> seprated> in conf/zeppelin-env.sh
>>>>>>>>>
>>>>>>>>> 2. include the statement spark.jars  >>>>>>>> separated> in ?conf/spark-defaults.conf
>>>>>>>>>
>>>>>>>>> 3. use the z.load("the location of jar file in the local
>>>>>>>>> filesystem") in zepelin notebook
>>>>>>>>>
>>>>>>>>> I could test the first two and they both works fine. The third one
>>>>>>>>> does not work. Here is the snippet i use
>>>>>>>>>
>>>>>>>>> %dep
>>>>>>>>> z.reset()
>>>>>>>>>
>>>>>>>>> z.load("file:///home/bala/Projects/pocv8.new/mapreduce/build/libs/mapreduce.jar")
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> Further, the import of class belongs to the above jar file is
>>>>>>>>> working when I use the statement import com.  in zeppelin 
>>>>>>>>> notebook.
>>>>>>>>> However, I get the class not found exception in the executor for the 
>>>>>>>>> same
>>>>>>>>> class.
>>>>>>>>>
>>>>>>>>> Any clue here would help greatly
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> regards
>>>>>>>>> Bala
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>
>>>>
>>>> --
>>>>
>>>> *Regards,*
>>>> Wail Alkowaileet
>>>>
>>>
>>>
>>
>


Re: Providing third party jar files to spark

2016-01-25 Thread Hyung Sung Shim
Hello.
I think Wail Alkowaileet's suggestion could work.
Balachandar, could you try running your application with spark-shell?


2016-01-25 15:45 GMT+09:00 Wail Alkowaileet :

> I used z.load in my case and it seems to be working just fine.
> Can you try spark-shell with your jar file? and see what is the error?
>
> I assume the problem that your application requires third-party jars.
> Therefore, you need to build your app with 'assembly'.
>
>
> On Mon, Jan 25, 2016 at 9:39 AM, Balachandar R.A. <
> balachandar...@gmail.com> wrote:
>
>> Hello Hyung,
>>
>> There is nothig I could make out from error log as it is plain
>> straightforward that classNotFoundException
>>
>> On 25 January 2016 at 11:34, Hyung Sung Shim  wrote:
>>
>>> It's weird..so Could you send the error log for details?
>>>
>>> 2016-01-25 15:00 GMT+09:00 Balachandar R.A. :
>>>
>>>> Hi Hyung,
>>>>
>>>> Thanks for the response. This I have tried but did not work.
>>>>
>>>> regards
>>>> Bala
>>>>
>>>> On 25 January 2016 at 11:27, Hyung Sung Shim  wrote:
>>>>
>>>>> Hello. Balachandar.
>>>>> In case of third one that you've tried, It must be first executed in
>>>>> the notebook.
>>>>> Could you try restart the zeppelin and run first the "%dep z.load()"
>>>>> paragraph?
>>>>>
>>>>>
>>>>> 2016-01-25 14:39 GMT+09:00 Balachandar R.A. 
>>>>> :
>>>>>
>>>>>> Hi
>>>>>>
>>>>>> Any help would be greatly appreciated :-)
>>>>>>
>>>>>>
>>>>>> -- Forwarded message --
>>>>>> From: Balachandar R.A. 
>>>>>> Date: 21 January 2016 at 14:11
>>>>>> Subject: Providing third party jar files to spark
>>>>>> To: users@zeppelin.incubator.apache.org
>>>>>>
>>>>>>
>>>>>> Hello
>>>>>>
>>>>>> My spark based map tasks needs to access third party jar files. I
>>>>>> found below options to submit third party jar files to spark interpreter
>>>>>>
>>>>>> 1. export SPARK_SUBMIT_OPTIONS=>>>>> seprated> in conf/zeppelin-env.sh
>>>>>>
>>>>>> 2. include the statement spark.jars  >>>>> separated> in ?conf/spark-defaults.conf
>>>>>>
>>>>>> 3. use the z.load("the location of jar file in the local filesystem")
>>>>>> in zepelin notebook
>>>>>>
>>>>>> I could test the first two and they both works fine. The third one
>>>>>> does not work. Here is the snippet i use
>>>>>>
>>>>>> %dep
>>>>>> z.reset()
>>>>>>
>>>>>> z.load("file:///home/bala/Projects/pocv8.new/mapreduce/build/libs/mapreduce.jar")
>>>>>>
>>>>>>
>>>>>> Further, the import of class belongs to the above jar file is working
>>>>>> when I use the statement import com.  in zeppelin notebook. However, 
>>>>>> I
>>>>>> get the class not found exception in the executor for the same class.
>>>>>>
>>>>>> Any clue here would help greatly
>>>>>>
>>>>>>
>>>>>> regards
>>>>>> Bala
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
>
>
> --
>
> *Regards,*
> Wail Alkowaileet
>


Re: Providing third party jar files to spark

2016-01-24 Thread Hyung Sung Shim
That's odd. Could you send the error log for more details?

2016-01-25 15:00 GMT+09:00 Balachandar R.A. :

> Hi Hyung,
>
> Thanks for the response. This I have tried but did not work.
>
> regards
> Bala
>
> On 25 January 2016 at 11:27, Hyung Sung Shim  wrote:
>
>> Hello. Balachandar.
>> In case of third one that you've tried, It must be first executed in the
>> notebook.
>> Could you try restart the zeppelin and run first the "%dep z.load()"
>> paragraph?
>>
>>
>> 2016-01-25 14:39 GMT+09:00 Balachandar R.A. :
>>
>>> Hi
>>>
>>> Any help would be greatly appreciated :-)
>>>
>>>
>>> -- Forwarded message --
>>> From: Balachandar R.A. 
>>> Date: 21 January 2016 at 14:11
>>> Subject: Providing third party jar files to spark
>>> To: users@zeppelin.incubator.apache.org
>>>
>>>
>>> Hello
>>>
>>> My spark based map tasks needs to access third party jar files. I found
>>> below options to submit third party jar files to spark interpreter
>>>
>>> 1. export SPARK_SUBMIT_OPTIONS=
>>> in conf/zeppelin-env.sh
>>>
>>> 2. include the statement spark.jars  >> separated> in ?conf/spark-defaults.conf
>>>
>>> 3. use the z.load("the location of jar file in the local filesystem") in
>>> zepelin notebook
>>>
>>> I could test the first two and they both works fine. The third one does
>>> not work. Here is the snippet i use
>>>
>>> %dep
>>> z.reset()
>>>
>>> z.load("file:///home/bala/Projects/pocv8.new/mapreduce/build/libs/mapreduce.jar")
>>>
>>>
>>> Further, the import of class belongs to the above jar file is working
>>> when I use the statement import com.  in zeppelin notebook. However, I
>>> get the class not found exception in the executor for the same class.
>>>
>>> Any clue here would help greatly
>>>
>>>
>>> regards
>>> Bala
>>>
>>>
>>>
>>>
>>
>


Re: Providing third party jar files to spark

2016-01-24 Thread Hyung Sung Shim
Hello, Balachandar.
For the third option you tried, the %dep paragraph must be executed first in
the notebook, before the Spark interpreter starts.
Could you try restarting Zeppelin and running the "%dep z.load()" paragraph
first?


2016-01-25 14:39 GMT+09:00 Balachandar R.A. :

> Hi
>
> Any help would be greatly appreciated :-)
>
>
> -- Forwarded message --
> From: Balachandar R.A. 
> Date: 21 January 2016 at 14:11
> Subject: Providing third party jar files to spark
> To: users@zeppelin.incubator.apache.org
>
>
> Hello
>
> My Spark-based map tasks need to access third-party jar files. I found the
> below options for submitting third-party jar files to the Spark interpreter:
>
> 1. export SPARK_SUBMIT_OPTIONS=<jar files, comma separated> in
> conf/zeppelin-env.sh
>
> 2. include the statement spark.jars <jar files, comma separated> in
> conf/spark-defaults.conf
>
> 3. use the z.load("the location of jar file in the local filesystem") in
> zepelin notebook
>
> I could test the first two and they both work fine. The third one does
> not work. Here is the snippet i use
>
> %dep
> z.reset()
>
> z.load("file:///home/bala/Projects/pocv8.new/mapreduce/build/libs/mapreduce.jar")
>
>
> Further, the import of class belongs to the above jar file is working when
> I use the statement import com.  in zeppelin notebook. However, I get
> the class not found exception in the executor for the same class.
>
> Any clue here would help greatly
>
>
> regards
> Bala
>
>
>
>


Re: Exception when starting Zeppelin server

2016-01-13 Thread Hyung Sung Shim
Hello.
This looks like a jar-hell (conflicting dependency) issue, I think.
There are many ways to solve it, but cleaning the local Maven repository
(~/.m2/repository) is the simplest.
Thanks.
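Before wiping ~/.m2/repository, it can help to confirm the symptom: jar-hell usually shows up as the same artifact present in multiple versions on the classpath (compare the duplicate SLF4J bindings in the quoted log below). A small illustrative helper, not part of Zeppelin or Maven, that flags duplicate artifact names in a lib directory listing:

```python
import re
from collections import defaultdict

def duplicate_artifacts(jar_names):
    """Group jar file names by artifact id (the name minus its trailing
    version) and return the groups that appear more than once -- the
    usual fingerprint of a jar-hell classpath."""
    groups = defaultdict(list)
    for name in jar_names:
        artifact = re.sub(r"-\d[\w.]*\.jar$", "", name)
        groups[artifact].append(name)
    return {a: names for a, names in groups.items() if len(names) > 1}

print(duplicate_artifacts([
    "slf4j-log4j12-1.7.10.jar",
    "slf4j-log4j12-1.6.1.jar",
    "guava-14.0.1.jar",
]))
```

If this reports duplicates after a clean rebuild, the conflict is coming from the build itself rather than a stale local repository.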

2016-01-14 13:54 GMT+09:00 Fazlan Nazeem :

> Hi Hyung,
>
> Cleaning the m2 repo and building it solved the issue. Any idea how the m2
> repo would have affected this issue?
>
> On Wed, Jan 13, 2016 at 7:39 PM, Hyung Sung Shim 
> wrote:
>
>> Hello.
>>
>> I tried to build and run the same way you did, and it works well.
>> My Zeppelin test environment is CentOS 6 and JDK 7.
>>
>> What's your environment of zeppelin?
>> Could you try to build and run zeppelin after remove your maven
>> repository(~/.m2/repository/)?
>>
>> If you let me know the result, I could help you.
>>
>> Thanks.
>>
>>
>> 2016-01-13 21:44 GMT+09:00 Fazlan Nazeem :
>>
>>> Hi,
>>>
>>> I cloned the latest repo of Zeppelin and built it using the following
>>> command.
>>>
>>> mvn clean package -Pspark-1.6 -Phadoop-2.4 -Pyarn -Ppyspark -DskipTests
>>>
>>>
>>> The build is successful. But when I start the server using the following
>>> command
>>>
>>> bin/zeppelin.sh
>>>
>>> I am getting the following error
>>>
>>> SLF4J: Class path contains multiple SLF4J bindings.
>>> SLF4J: Found binding in
>>> [jar:file:/home/fazlann/zeppelin-repo/test/incubator-zeppelin/zeppelin-server/target/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>>> SLF4J: Found binding in
>>> [jar:file:/home/fazlann/zeppelin-repo/test/incubator-zeppelin/zeppelin-server/target/lib/zeppelin-interpreter-0.6.0-incubating-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>>> SLF4J: Found binding in
>>> [jar:file:/home/fazlann/zeppelin-repo/test/incubator-zeppelin/zeppelin-zengine/target/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>>> SLF4J: Found binding in
>>> [jar:file:/home/fazlann/zeppelin-repo/test/incubator-zeppelin/zeppelin-zengine/target/lib/zeppelin-interpreter-0.6.0-incubating-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>>> SLF4J: Found binding in
>>> [jar:file:/home/fazlann/zeppelin-repo/test/incubator-zeppelin/zeppelin-interpreter/target/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>>> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
>>> explanation.
>>> SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
>>> Exception in thread "main" java.lang.NoClassDefFoundError:
>>> com/sun/jersey/server/impl/application/DeferredResourceConfig
>>> at java.lang.Class.getDeclaredConstructors0(Native Method)
>>> at java.lang.Class.privateGetDeclaredConstructors(Class.java:2595)
>>> at java.lang.Class.getConstructor0(Class.java:2895)
>>> at java.lang.Class.newInstance(Class.java:354)
>>> at
>>> org.eclipse.jetty.servlet.ServletContextHandler$Context.createServlet(ServletContextHandler.java:1075)
>>> at
>>> org.eclipse.jetty.servlet.ServletHolder.newInstance(ServletHolder.java:957)
>>> at
>>> org.eclipse.jetty.servlet.ServletHolder.initServlet(ServletHolder.java:514)
>>> at
>>> org.eclipse.jetty.servlet.ServletHolder.doStart(ServletHolder.java:344)
>>> at
>>> org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:64)
>>> at
>>> org.eclipse.jetty.servlet.ServletHandler.initialize(ServletHandler.java:791)
>>> at
>>> org.eclipse.jetty.servlet.ServletContextHandler.startContext(ServletContextHandler.java:265)
>>> at
>>> org.eclipse.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1242)
>>> at
>>> org.eclipse.jetty.server.handler.ContextHandler.doStart(ContextHandler.java:717)
>>> at org.eclipse.jetty.webapp.WebAppContext.doStart(WebAppContext.java:494)
>>> at
>>> org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:64)
>>> at
>>> org.eclipse.jetty.server.handler.HandlerCollection.doStart(HandlerCollection.java:229)
>>> at
>>> org.eclipse.jetty.server.handler.ContextHandlerCollection.doStart(ContextHandlerCollection.java:172)
>>> at
>>> org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:64)
>>> at
>>> org.eclipse.jetty.server.handler.HandlerWrapper.doStart(HandlerWrapper.java:95)
>>> at org.eclipse.jetty.server.Server.doStart(Server.java:282)
>>> at
>>> org.eclipse.jetty.util.compon

Re: Exception when starting Zeppelin server

2016-01-13 Thread Hyung Sung Shim
Hello.

I tried building and running as you described, and it works well.
My test environment for Zeppelin is CentOS 6 with JDK 7.

What's your Zeppelin environment?
Could you try building and running Zeppelin after removing your local Maven
repository (~/.m2/repository/)?

If you let me know the result, I can help you further.
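If you'd rather not lose the cached artifacts, a small sketch of the clean-rebuild step (the backup path is just an example):

```shell
# Move the local Maven repository aside instead of deleting it,
# then rebuild so all artifacts are downloaded fresh.
mv ~/.m2/repository ~/.m2/repository.bak
mvn clean package -Pspark-1.6 -Phadoop-2.4 -Pyarn -Ppyspark -DskipTests
```

If the fresh build works, the backup can simply be removed.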

Thanks.


2016-01-13 21:44 GMT+09:00 Fazlan Nazeem :

> Hi,
>
> I cloned the latest repo of Zeppelin and built it using the following
> command.
>
> mvn clean package -Pspark-1.6 -Phadoop-2.4 -Pyarn -Ppyspark -DskipTests
>
>
> The build is successful. But when I start the server using the following
> command
>
> bin/zeppelin.sh
>
> I am getting the following error
>
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in
> [jar:file:/home/fazlann/zeppelin-repo/test/incubator-zeppelin/zeppelin-server/target/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in
> [jar:file:/home/fazlann/zeppelin-repo/test/incubator-zeppelin/zeppelin-server/target/lib/zeppelin-interpreter-0.6.0-incubating-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in
> [jar:file:/home/fazlann/zeppelin-repo/test/incubator-zeppelin/zeppelin-zengine/target/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in
> [jar:file:/home/fazlann/zeppelin-repo/test/incubator-zeppelin/zeppelin-zengine/target/lib/zeppelin-interpreter-0.6.0-incubating-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in
> [jar:file:/home/fazlann/zeppelin-repo/test/incubator-zeppelin/zeppelin-interpreter/target/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
> explanation.
> SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
> Exception in thread "main" java.lang.NoClassDefFoundError:
> com/sun/jersey/server/impl/application/DeferredResourceConfig
> at java.lang.Class.getDeclaredConstructors0(Native Method)
> at java.lang.Class.privateGetDeclaredConstructors(Class.java:2595)
> at java.lang.Class.getConstructor0(Class.java:2895)
> at java.lang.Class.newInstance(Class.java:354)
> at
> org.eclipse.jetty.servlet.ServletContextHandler$Context.createServlet(ServletContextHandler.java:1075)
> at
> org.eclipse.jetty.servlet.ServletHolder.newInstance(ServletHolder.java:957)
> at
> org.eclipse.jetty.servlet.ServletHolder.initServlet(ServletHolder.java:514)
> at org.eclipse.jetty.servlet.ServletHolder.doStart(ServletHolder.java:344)
> at
> org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:64)
> at
> org.eclipse.jetty.servlet.ServletHandler.initialize(ServletHandler.java:791)
> at
> org.eclipse.jetty.servlet.ServletContextHandler.startContext(ServletContextHandler.java:265)
> at
> org.eclipse.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1242)
> at
> org.eclipse.jetty.server.handler.ContextHandler.doStart(ContextHandler.java:717)
> at org.eclipse.jetty.webapp.WebAppContext.doStart(WebAppContext.java:494)
> at
> org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:64)
> at
> org.eclipse.jetty.server.handler.HandlerCollection.doStart(HandlerCollection.java:229)
> at
> org.eclipse.jetty.server.handler.ContextHandlerCollection.doStart(ContextHandlerCollection.java:172)
> at
> org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:64)
> at
> org.eclipse.jetty.server.handler.HandlerWrapper.doStart(HandlerWrapper.java:95)
> at org.eclipse.jetty.server.Server.doStart(Server.java:282)
> at
> org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:64)
> at org.apache.zeppelin.server.ZeppelinServer.main(ZeppelinServer.java:114)
> Caused by: java.lang.ClassNotFoundException:
> com.sun.jersey.server.impl.application.DeferredResourceConfig
> at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
> at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
> at java.security.AccessController.doPrivileged(Native Method)
> at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
> at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
> ... 22 more
>
> How do I solve this issue?
>
>
> --
> Thanks & Regards,
>
> Fazlan Nazeem
>
>


Re: 2015: year in review

2015-12-30 Thread Hyung Sung Shim
Dear Alexander.
Thank you for your very good review, and happy new year!


On Thursday, December 31, 2015, Alexander Bezzubov wrote:

> Dear fellow Zeppelin developers and users,
>
> The year 2015 is about to end, so I wanted to say thank you to everybody
> here - it's been a great year for the project indeed!
>
> I wish you a happy new year and want to share a small review of Zeppelin in
> 2015 that I did:
>
> https://medium.com/@bzz_/apache-zeppelin-incubating-2015-year-in-review-a938d978a309
>
> Hope you enjoy it and please, do not hesitate to share yours.
>
> It was a pleasure to work with you guys, looking forward to next year!
>
> --
> Alex
>


Re: HADOOP_CONF_DIR is ignored in zeppelin-env.sh

2015-12-29 Thread Hyung Sung Shim
Hello.
It may have been added in version 0.5.5.

2015-12-30 4:42 GMT+09:00 Jens Rabe :

> Hello,
>
> is this new? In earlier Zeppelin versions I could just set the mentioned
> options and it worked in yarn-client mode.
>
> Am 29.12.2015 um 20:40 schrieb Hyung Sung Shim :
>
> Hello Jens Rabe.
>
> If you want to run Zeppelin using spark-submit, you should set the
> SPARK_HOME variable in zeppelin-env.sh.
>
> Thanks.
>
>
>
> 2015-12-30 4:18 GMT+09:00 Jens Rabe :
>
>> Hello,
>>
>> I am trying to set up Zeppelin to use Spark on YARN. Spark on YARN itself
>> works, I can use spark-submit and spark-shell. So I set up Zeppelin and my
>> zeppelin-env.sh contains the following:
>>
>> #!/bin/bash
>>
>> export JAVA_HOME=/usr/lib/jvm/java-7-oracle
>> export MASTER=yarn-client # Spark master url. eg.
>> spark://master_addr:7077. Leave empty if you want to use local mode.
>> export ZEPPELIN_JAVA_OPTS="-Dspark.dynamicAllocation.enabled=true
>> -Dspark.shuffle.service.enabled=true"   # Additional jvm options.
>> for example, export ZEPPELIN_JAVA_OPTS="-Dspark.executor.memory=8g
>> -Dspark.cores.max=16"
>> export ZEPPELIN_PORT=10080
>> export HADOOP_CONF_DIR=/opt/hadoop/etc/hadoop
>>
>> I double-checked that /opt/hadoop/etc/hadoop really contains the correct
>> configuration files, and it does. zeppelin-env-sh is executable, too. But
>> when I start Zeppelin and try to submit something, it tries to connect to a
>> YARN RM at 127.0.0.1. It seems that it ignores HADOOP_CONF_DIR.
>>
>> Is this a bug or am I missing something?
>>
>> - Jens
>
>
>
>


Re: HADOOP_CONF_DIR is ignored in zeppelin-env.sh

2015-12-29 Thread Hyung Sung Shim
Hello Jens Rabe.

If you want to run Zeppelin using spark-submit, you should set the
SPARK_HOME variable in zeppelin-env.sh.
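For example, in conf/zeppelin-env.sh (the path below is only an example; use your actual Spark installation directory):

```shell
# Point Zeppelin at an existing Spark installation so the interpreter
# is launched via $SPARK_HOME/bin/spark-submit
export SPARK_HOME=/opt/spark
```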

Thanks.



2015-12-30 4:18 GMT+09:00 Jens Rabe :

> Hello,
>
> I am trying to set up Zeppelin to use Spark on YARN. Spark on YARN itself
> works, I can use spark-submit and spark-shell. So I set up Zeppelin and my
> zeppelin-env.sh contains the following:
>
> #!/bin/bash
>
> export JAVA_HOME=/usr/lib/jvm/java-7-oracle
> export MASTER=yarn-client # Spark master url. eg.
> spark://master_addr:7077. Leave empty if you want to use local mode.
> export ZEPPELIN_JAVA_OPTS="-Dspark.dynamicAllocation.enabled=true
> -Dspark.shuffle.service.enabled=true"   # Additional jvm options.
> for example, export ZEPPELIN_JAVA_OPTS="-Dspark.executor.memory=8g
> -Dspark.cores.max=16"
> export ZEPPELIN_PORT=10080
> export HADOOP_CONF_DIR=/opt/hadoop/etc/hadoop
>
> I double-checked that /opt/hadoop/etc/hadoop really contains the correct
> configuration files, and it does. zeppelin-env-sh is executable, too. But
> when I start Zeppelin and try to submit something, it tries to connect to a
> YARN RM at 127.0.0.1. It seems that it ignores HADOOP_CONF_DIR.
>
> Is this a bug or am I missing something?
>
> - Jens


Re: zeppelin behind apache reverse proxy

2015-12-18 Thread Hyung Sung Shim
Hello vincent gromakowski.

What version of httpd are you using?
httpd versions below 2.4 do not support WebSocket proxying.
So you can consider https://github.com/disconnect/apache-websocket.

Thanks.
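For httpd 2.4 and later, mod_proxy_wstunnel can proxy the WebSocket connection directly. A sketch of a vhost (the port, and especially the /ws endpoint, are assumptions based on a default Zeppelin setup; adjust for your installation):

```apache
<VirtualHost *:80>
    ServerName localhost
    ProxyPreserveHost On
    ProxyRequests Off

    # WebSocket endpoint must be proxied before the catch-all rule below;
    # requires mod_proxy and mod_proxy_wstunnel to be loaded
    ProxyPass /ws ws://localhost:9090/ws
    ProxyPassReverse /ws ws://localhost:9090/ws

    ProxyPass / http://localhost:9090/
    ProxyPassReverse / http://localhost:9090/
</VirtualHost>
```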

2015-12-18 21:26 GMT-08:00 Girish Reddy :

> I have set up an Apache reverse proxy on the same host as Zeppelin and am
> able to access the Apache URL (localhost:80) and get redirected to Zeppelin
> (localhost:9090).  Here's my Apache conf file contents:
>
> <VirtualHost *:80>
> ProxyPreserveHost On
> ProxyRequests Off
>
>
> ProxyPass / http://localhost:9090/
> ProxyPassReverse / http://localhost:9090/
>
> ServerName localhost
> </VirtualHost>
>
>
> What issues are you running into?  Also, my next step is to handle
> authentication in apache.  Were you able to get that working?
>
>
> On Fri, Dec 18, 2015 at 4:54 AM, vincent gromakowski <
> vincent.gromakow...@gmail.com> wrote:
>
>> I am trying to make Zeppelin work behind an Apache reverse proxy that
>> would deal with user authentication but I have issues with the websocket.
>> Could you please provide me some examples of Apache configuration files
>> that would work as a reverse proxy?
>>
>
>


Re: Help needed

2015-12-17 Thread Hyung Sung Shim
Hello.

You can refer to the Zeppelin documentation here:
https://zeppelin.incubator.apache.org/docs/0.5.5-incubating/

And if you want to run Zeppelin on YARN, could you set the configuration below?
*export MASTER="yarn-client"*
*export SPARK_HOME="YOUR SPARK HOME"*
*export HADOOP_CONF_DIR="YOUR HADOOP CONF DIR" # yarn-site.xml should be
located in HADOOP_CONF_DIR.*

Welcome!

2015-12-17 6:29 GMT-08:00 Adam f :

> Hi all,
> I am new to Zeppelin.
> I downloaded the new version- 0.5.5
> I am trying to deploy Zeppelin on Casandra Hadoop with Yarn
> The configuration of the cluster in
> Hadoop 2.6.0-cdh5.4.1
> Spark 1.5.1
>
> can you help me how to config the software?
>
> Thanks you
>


Re: Using Zeppelin with Kerebros enabled cluster running Spark on Yarn

2015-12-03 Thread Hyung Sung Shim
Hello.

If you set SPARK_HOME in zeppelin-env.sh, Zeppelin will run Spark with
spark-submit.
So you don't need to configure Kerberos in Zeppelin itself; just put it in
the Spark configuration (spark-defaults.conf).

Thanks.
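A sketch of the relevant spark-defaults.conf entries for yarn-client mode (the keytab path and principal below are placeholders):

```
# Kerberos credentials used by spark-submit when launching on YARN
spark.yarn.keytab      /etc/security/keytabs/zeppelin.keytab
spark.yarn.principal   zeppelin@EXAMPLE.COM
```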

2015-11-26 9:06 GMT+09:00 venksvisw :

> Hi,
>
> I am trying to run spark interpreter in Yarn-client mode on Kerberos
> cluster. I am aware that Spark supports passing Keytab and Principal
> information as part of Configuration.
>
> How do I configure the same in Zeppelin spark intrepreter
>
>
> Thanks
> Venkat
>
>
>
> --
> View this message in context:
> http://apache-zeppelin-users-incubating-mailing-list.75479.x6.nabble.com/Using-Zeppelin-with-Kerebros-enabled-cluster-running-Spark-on-Yarn-tp1660.html
> Sent from the Apache Zeppelin Users (incubating) mailing list mailing list
> archive at Nabble.com.
>


Re: Zeppelin isn't connected in CDH 5.4.8

2015-11-25 Thread Hyung Sung Shim
So Zeppelin works in the VM, but Zeppelin on the Spark host does not?
That's strange.
What are the differences in environment (configuration) between the VM
Zeppelin and the host Zeppelin?

And if you don't use PySpark, you don't need to install Python.


2015-11-25 17:24 GMT+09:00 Guillermo Ortiz :

> I have a three-node cluster, but I installed YARN, Spark, HDFS and Zeppelin
> on the same node.
>
> Something must be missing. I tried to install Zeppelin in the
> QuickStart VM of Cloudera and it works.
>
> When I install the binary distribution, do I have to install some extra
> packages like Python, or is it all included in the binary version?
>
> 2015-11-25 2:16 GMT+01:00 Hyung Sung Shim :
>
>> Hello.
>> Are Zeppelin, Spark, and YARN all on the same machine?
>> If not, you should open the ports they (Zeppelin, Spark, YARN, HDFS) are using.
>>
>> On Wednesday, November 25, 2015, Guillermo Ortiz wrote:
>>
>> The spark's version is 1.3.1 in this CDH.
>>>
>>> 2015-11-24 17:38 GMT+01:00 Guillermo Ortiz :
>>>
>>>> I'm using zeppelin 0.5.5 (binary installation)
>>>>
>>>> SPARK_HOME is correct /opt/cloudera/parcels/CDH/lib/spark/
>>>> and I checked  /opt/cloudera/parcels/CDH/lib/spark/bin/spark-submit
>>>>
>>>> I don't know about checking connection between zeppelin and spark.
>>>> Could you be more specific? thanks.
>>>>
>>>> 2015-11-24 17:19 GMT+01:00 Hyung Sung Shim :
>>>>
>>>>> Hello.
>>>>>
>>>>> Your configuration looks fine.
>>>>>
>>>>> Please check the following:
>>>>> - Check the connection (ports) between Zeppelin and Spark.
>>>>>
>>>>> - If you set SPARK_HOME in conf/zeppelin-env.sh, Zeppelin uses the
>>>>> SPARK_HOME/bin/spark-submit command, so check that spark-submit exists.
>>>>>
>>>>> Let me know the results, and if you still have the problem,
>>>>> tell me which Zeppelin version you are using and I will try to
>>>>> reproduce it.
>>>>>
>>>>> Thanks.
>>>>>
>>>>> 2015-11-24 23:57 GMT+09:00 Guillermo Ortiz :
>>>>>
>>>>>> I config spark.home to
>>>>>> /opt/cloudera/parcels/CDH-5.4.8-1.cdh5.4.8.p0.4/lib/spark
>>>>>> I'm not sure about the configuration of spark.yarn.jar
>>>>>>
>>>>>> I only have this configuration in zeppelin-env.sh
>>>>>> export MASTER=yarn-client
>>>>>> export
>>>>>> SPARK_HOME=/opt/cloudera/parcels/CDH-5.4.8-1.cdh5.4.8.p0.4/lib/spark
>>>>>> export HADOOP_CONF_DIR=/etc/hadoop/conf
>>>>>>
>>>>>>
>>>>>> 2015-11-24 15:17 GMT+01:00 Guillermo Ortiz :
>>>>>>
>>>>>>> I'm trying to install Zeppelin in CDH 5.4.8 and I don't get to
>>>>>>> connect. It starts correctly and I don't see any error in the logs but 
>>>>>>> it
>>>>>>> has a "disconnect" state in the web.
>>>>>>>
>>>>>>> I have tried with export MASTER=yarn-client and local[*]
>>>>>>> I have edited the zeppelin-env.sh adding the export
>>>>>>> HADOOP_CONF_DIR=/etc/hadoop/conf
>>>>>>>
>>>>>>> Spark works fine with the spark-shell.
>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>


Re: Zeppelin isn't connected in CDH 5.4.8

2015-11-24 Thread Hyung Sung Shim
Hello.
Are Zeppelin, Spark, and YARN all on the same machine?
If not, you should open the ports they (Zeppelin, Spark, YARN, HDFS) are using.

On Wednesday, November 25, 2015, Guillermo Ortiz wrote:

> The spark's version is 1.3.1 in this CDH.
>
> 2015-11-24 17:38 GMT+01:00 Guillermo Ortiz  >:
>
>> I'm using zeppelin 0.5.5 (binary installation)
>>
>> SPARK_HOME is correct /opt/cloudera/parcels/CDH/lib/spark/
>> and I checked  /opt/cloudera/parcels/CDH/lib/spark/bin/spark-submit
>>
>> I don't know about checking connection between zeppelin and spark. Could
>> you be more specific? thanks.
>>
>> 2015-11-24 17:19 GMT+01:00 Hyung Sung Shim > >:
>>
>>> Hello.
>>>
>>> Your configuration looks fine.
>>>
>>> Please check the following:
>>> - Check the connection (ports) between Zeppelin and Spark.
>>>
>>> - If you set SPARK_HOME in conf/zeppelin-env.sh, Zeppelin uses the
>>> SPARK_HOME/bin/spark-submit command, so check that spark-submit exists.
>>>
>>> Let me know the results, and if you still have the problem, tell me which
>>> Zeppelin version you are using and I will try to reproduce it.
>>>
>>> Thanks.
>>>
>>> 2015-11-24 23:57 GMT+09:00 Guillermo Ortiz >> >:
>>>
>>>> I config spark.home to
>>>> /opt/cloudera/parcels/CDH-5.4.8-1.cdh5.4.8.p0.4/lib/spark
>>>> I'm not sure about the configuration of spark.yarn.jar
>>>>
>>>> I only have this configuration in zeppelin-env.sh
>>>> export MASTER=yarn-client
>>>> export
>>>> SPARK_HOME=/opt/cloudera/parcels/CDH-5.4.8-1.cdh5.4.8.p0.4/lib/spark
>>>> export HADOOP_CONF_DIR=/etc/hadoop/conf
>>>>
>>>>
>>>> 2015-11-24 15:17 GMT+01:00 Guillermo Ortiz >>> >:
>>>>
>>>>> I'm trying to install Zeppelin in CDH 5.4.8 and I don't get to
>>>>> connect. It starts correctly and I don't see any error in the logs but it
>>>>> has a "disconnect" state in the web.
>>>>>
>>>>> I have tried with export MASTER=yarn-client and local[*]
>>>>> I have edited the zeppelin-env.sh adding the export
>>>>> HADOOP_CONF_DIR=/etc/hadoop/conf
>>>>>
>>>>> Spark works fine with the spark-shell.
>>>>>
>>>>>
>>>>
>>>
>>
>


Re: Zeppelin isn't connected in CDH 5.4.8

2015-11-24 Thread Hyung Sung Shim
Hello.

Your configuration looks fine.

Please check the following:
- Check the connection (ports) between Zeppelin and Spark.

- If you set SPARK_HOME in conf/zeppelin-env.sh, Zeppelin uses the
SPARK_HOME/bin/spark-submit command, so check that spark-submit exists.

Let me know the results, and if you still have the problem, tell me which
Zeppelin version you are using and I will try to reproduce it.

Thanks.
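A quick sketch of those checks using standard tools (the host name is a placeholder, and 8032 is the default YARN ResourceManager port; adjust both for your cluster):

```shell
# 1. Confirm spark-submit exists under SPARK_HOME
ls -l "$SPARK_HOME/bin/spark-submit"

# 2. Confirm the ResourceManager is reachable from the Zeppelin host
nc -z -v resourcemanager-host 8032
```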

2015-11-24 23:57 GMT+09:00 Guillermo Ortiz :

> I config spark.home to
> /opt/cloudera/parcels/CDH-5.4.8-1.cdh5.4.8.p0.4/lib/spark
> I'm not sure about the configuration of spark.yarn.jar
>
> I only have this configuration in zeppelin-env.sh
> export MASTER=yarn-client
> export SPARK_HOME=/opt/cloudera/parcels/CDH-5.4.8-1.cdh5.4.8.p0.4/lib/spark
> export HADOOP_CONF_DIR=/etc/hadoop/conf
>
>
> 2015-11-24 15:17 GMT+01:00 Guillermo Ortiz :
>
>> I'm trying to install Zeppelin in CDH 5.4.8 and I don't get to connect.
>> It starts correctly and I don't see any error in the logs but it has a
>> "disconnect" state in the web.
>>
>> I have tried with export MASTER=yarn-client and local[*]
>> I have edited the zeppelin-env.sh adding the export
>> HADOOP_CONF_DIR=/etc/hadoop/conf
>>
>> Spark works fine with the spark-shell.
>>
>>
>


Re: com.fasterxml.jackson.databind.JsonMappingException

2015-11-21 Thread Hyung Sung Shim
Hello.

If you use CDH for Hadoop, can you try a build command like 'mvn clean package
-Pvendor-repo -DskipTests -Pspark-1.5 -Dspark.version=1.5.2
-Dhadoop.version=2.6.0-mr1-cdh5.4.8'?

I hope this helps.
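Mismatched jackson-databind versions between Zeppelin's and Spark's dependencies are a common cause of this JsonMappingException, so it can also help to see which Jackson versions the build actually pulls in; a sketch:

```shell
# List every Jackson artifact on the build's classpath
mvn dependency:tree -Dincludes=com.fasterxml.jackson.core
```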

2015-11-22 6:55 GMT+09:00 Timur Shenkao :

> Hi!
>
> I use CentOS 6.7 + Spark 1.5.2 Standalone + Cloudera Hadoop 5.4.8 on the
> same cluster. I can't use Mesos or Spark on YARN.
> I decided to try Zeppelin. I tried using binaries and building from sources
> with different parameters.
> At last, I built version 0.6.0 so:
> mvn clean package -DskipTests -Pspark-1.5 -Phadoop-2.6 -Pyarn -Ppyspark
> -Pbuild-distr
>
> But I constantly get the error:
>
> com.fasterxml.jackson.databind.JsonMappingException: Could not find
> creator property with name 'id' (in class
> org.apache.spark.rdd.RDDOperationScope) at [Source:
> {"id":"0","name":"parallelize"}; line: 1, column: 1] at
> com.fasterxml.jackson.databind.JsonMappingException.from(JsonMappingException.java:148)
> at
> com.fasterxml.jackson.databind.DeserializationContext.mappingException(DeserializationContext.java:843)
> at
> com.fasterxml.jackson.databind.deser.BeanDeserializerFactory.addBeanProps(BeanDeserializerFactory.java:533)
> at
> com.fasterxml.jackson.databind.deser.BeanDeserializerFactory.buildBeanDeserializer(BeanDeserializerFactory.java:220)
> at
> com.fasterxml.jackson.databind.deser.BeanDeserializerFactory.createBeanDeserializer(BeanDeserializerFactory.java:143)
> at
> com.fasterxml.jackson.databind.deser.DeserializerCache._createDeserializer2(DeserializerCache.java:409)
> at
> com.fasterxml.jackson.databind.deser.DeserializerCache._createDeserializer(DeserializerCache.java:358)
> at
> com.fasterxml.jackson.databind.deser.DeserializerCache._createAndCache2(DeserializerCache.java:265)
> at
> com.fasterxml.jackson.databind.deser.DeserializerCache._createAndCacheValueDeserializer(DeserializerCache.java:245)
> at
> com.fasterxml.jackson.databind.deser.DeserializerCache.findValueDeserializer(DeserializerCache.java:143)
> at
> com.fasterxml.jackson.databind.DeserializationContext.findRootValueDeserializer(DeserializationContext.java:439)
> at
> com.fasterxml.jackson.databind.ObjectMapper._findRootDeserializer(ObjectMapper.java:3666)
> at
> com.fasterxml.jackson.databind.ObjectMapper._readMapAndClose(ObjectMapper.java:3558)
> at
> com.fasterxml.jackson.databind.ObjectMapper.readValue(ObjectMapper.java:2578)
> at
> org.apache.spark.rdd.RDDOperationScope$.fromJson(RDDOperationScope.scala:82)
> at org.apache.spark.rdd.RDD$$anonfun$34.apply(RDD.scala:1603) at
> org.apache.spark.rdd.RDD$$anonfun$34.apply(RDD.scala:1603) at
> scala.Option.map(Option.scala:145) at
> org.apache.spark.rdd.RDD.(RDD.scala:1603) at
> org.apache.spark.rdd.ParallelCollectionRDD.(ParallelCollectionRDD.scala:85)
> at
> org.apache.spark.SparkContext$$anonfun$parallelize$1.apply(SparkContext.scala:725)
> at
> org.apache.spark.SparkContext$$anonfun$parallelize$1.apply(SparkContext.scala:723)
> at
> org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:147)
> at
> org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:108)
> at org.apache.spark.SparkContext.withScope(SparkContext.scala:709) at
> org.apache.spark.SparkContext.parallelize(SparkContext.scala:723) at
> $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.(:33) at
> $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.(:38) at
> $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.(:40) at
> $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.(:42) at
> $iwC$$iwC$$iwC$$iwC$$i
> ...
> and so on.
>
> My code is:
> %spark
> import org.apache.spark.sql._
> val sqlContext = new org.apache.spark.sql.hive.HiveContext(sc)
>
> case class Contact(name: String, phone: String)
> case class Person(name: String, age: Int, contacts: Seq[Contact])
>
> val records = (1 to 100).map { i =>
> Person(s"name_$i", i, (0 to 1).map { m => Contact(s"contact_$m",
> s"phone_$m") })
> }
>
> Then, it fails after the following line:
> sc.parallelize(records).toDF().write.format("orc").save("people")
>
> In spark-shell, this code works perfectly, so problem is in Zeppelin.
>
> By the way, your own tutorial gives the same error:
>
> // load bank data
> val bankText = sc.parallelize(
> IOUtils.toString(
>  new URL("
> https://s3.amazonaws.com/apache-zeppelin/tutorial/bank/bank.csv";),
>  Charset.forName("utf8")).split("\n"))
>
> case class Bank(age: Integer, job: String, marital: String, education:
> String, balance: Integer)
>
> val bank = bankText.map(s => s.split(";")).filter(s => s(0) !=
> "\"age\"").map(
> s => Bank(s(0).toInt,
> s(1).replaceAll("\"", ""),
> s(2).replaceAll("\"", ""),
> s(3).replaceAll("\"", ""),
> s(5).replaceAll("\"", "").toInt
> )
> ).toDF()
> bank.registerTempTable("bank")
>
>
> How to fix it? Change some dependency in pom.xml?
>



-- 

NFLabs Inc.  |  Content Service Team  |  Team Lead Hyung Sung Shim

*E. hsshim*@nflabs.com 

*T.* 02-3458-9650 *M. *010-4282-1230

*A.* Seoul

Re: [ANNOUNCE] Apache Zeppelin 0.5.5-incubating released

2015-11-19 Thread Hyung Sung Shim
Great!
Congratulations.

2015-11-19 22:33 GMT+09:00 moon soo Lee :

> The Apache Zeppelin (incubating) community is pleased to announce the
> availability of the 0.5.5-incubating release. The community has put
> significant effort into improving Apache Zeppelin since the last release,
> focusing on new backend support, stability improvements, and
> simplified configuration. More than 60 contributors provided new
> features and improvements and verified the release. More than 90 issues
> have been resolved.
>
> We encourage you to download the latest release from
> http://zeppelin.incubator.apache.org/download.html
>
> Release note is available at
> http://zeppelin.incubator.apache.org/releases/zeppelin-release-0.5.5-incubating.html
>
> We welcome your help and feedback. For more information on the project and
> how to get involved, visit our website at
> http://zeppelin.incubator.apache.org/
>
> Thanks to all users and contributors who have helped to improve
> Apache Zeppelin.
>
> Regards,
> The Apache Zeppelin community
>
>
> Disclaimer:
> Apache Zeppelin is an effort undergoing incubation at the Apache Software
> Foundation (ASF), sponsored by the Apache Incubator PMC.
> Incubation is required of all newly accepted projects until a further
> review indicates that the infrastructure, communications, and decision
> making process have stabilized in a manner consistent with other
> successful ASF projects.
> While incubation status is not necessarily a reflection of the
> completeness or stability of the code, it does indicate that the
> project has yet to be fully endorsed by the ASF.
>


