Re: Importing HBase data

2016-03-25 Thread Silvio Fiorito
There’s also this, which seems more current: 
https://github.com/apache/hbase/tree/master/hbase-spark

I haven’t used it, but I know Ted Malaska and others from Cloudera have worked 
heavily on it.
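
Untested on my end, but a rough sketch of what reading an HBase table into a DataFrame through a connector like that might look like (the DataSource format name and option keys below are assumptions, so check the project docs):

// Rough, untested sketch; the format name and the option keys are assumptions, not verified.
val df = sqlContext.read
  .format("org.apache.hadoop.hbase.spark")                               // assumed DataSource name from the hbase-spark module
  .option("hbase.table", "crimes")                                       // illustrative table name
  .option("hbase.columns.mapping", "id STRING :key, desc STRING d:desc") // illustrative column mapping syntax
  .load()

df.registerTempTable("crimes")   // then query it with %sql in a Zeppelin paragraph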

From: Felix Cheung
Reply-To: "users@zeppelin.incubator.apache.org"
Date: Friday, March 25, 2016 at 12:01 PM
To: "users@zeppelin.incubator.apache.org"
Subject: Re: Importing HBase data

You should be able to access that from Spark SQL through a package like 
http://spark-packages.org/package/Huawei-Spark/Spark-SQL-on-HBase

This package doesn't seem to have been updated in a while, though.



On Tue, Mar 22, 2016 at 11:06 AM -0700, "Kumiko Yada" wrote:


Hello,



Is there a way to import HBase data into a Zeppelin notebook using Spark SQL?



Thanks

Kumiko


RE: Spark interpreter idle timeout

2016-03-04 Thread Silvio Fiorito
Hi Dylan,

I see. I’ve only used dynamic resource allocation on YARN, in fact for 
scenarios such as this, but that Mesos issue you described sounds like a bug to 
me? Is this on 1.6?

Thanks,
Silvio



From: Dylan Meissner <dylan.meiss...@gettyimages.com>
Sent: Friday, March 4, 2016 12:55 PM
To: Silvio Fiorito <silvio.fior...@granturing.com>; users@zeppelin.incubator.apache.org
Subject: Re: Spark interpreter idle timeout


Thank you Silvio.



I have not actually tried dynamic allocation (it's not trivial to use an 
external shuffle service). I will do more research there.



What we are actually experiencing is Mesos offer starvation caused by many 
long-lived Spark frameworks hoarding "Mesos resource offers" (not resources). 
This is why I am considering closing/restarting the Spark interpreter rather 
than using dynamic allocation or fine-grained mode to keep resource usage at a 
minimum.



For example, looking through the current API documents, it seems possible for a 
"watchdog" process to determine whether a note is running and when it last ran, and 
then restart the interpreter.
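
Concretely, a sketch of what that could look like (the REST endpoints and IDs below are assumptions based on the API docs and would need to be verified against the Zeppelin version in use):

# Hypothetical watchdog sketch; REST paths, note ID, and interpreter-setting ID are placeholders/assumptions.
ZEPPELIN=http://localhost:8080
NOTE_ID=2A94M5J1Z
SPARK_SETTING_ID=2B1234567

# Inspect the note's job status (per-paragraph status and timestamps).
curl -s "$ZEPPELIN/api/notebook/job/$NOTE_ID"

# If nothing has run for longer than the idle threshold, restart the Spark interpreter setting.
curl -s -X PUT "$ZEPPELIN/api/interpreter/setting/restart/$SPARK_SETTING_ID"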



From: Silvio Fiorito <silvio.fior...@granturing.com>
Sent: Friday, March 4, 2016 9:05 AM
To: users@zeppelin.incubator.apache.org; Dylan Meissner
Subject: Re: Spark interpreter idle timeout

If you’re using Mesos as your Spark cluster manager, then you can use dynamic 
resource allocation. So as your users are running notebooks the Spark executors 
will scale up and down as needed, per the thresholds you define. And when the 
user is idle, Spark will automatically release resources.

Please see the docs here, for more info:
http://spark.apache.org/docs/latest/job-scheduling.html#dynamic-resource-allocation
http://spark.apache.org/docs/latest/running-on-mesos.html#dynamic-resource-allocation-with-mesos


Thanks,
Silvio

From: Dylan Meissner <dylan.meiss...@gettyimages.com>
Reply-To: "users@zeppelin.incubator.apache.org"
Date: Friday, March 4, 2016 at 11:52 AM
To: "users@zeppelin.incubator.apache.org"
Subject: Spark interpreter idle timeout

Greetings,

We run multiple Zeppelins per user in a Mesos cluster. The Mesos Marathon 
framework hosts the Zeppelin servers, and running a note causes a Spark 
framework to start a Spark context to distribute the workload described in the 
notes. This works well for us.

However, when notebooks are left unattended, we'd like the Spark interpreter to 
shut down. This will free resources that can go to other Mesos frameworks. Is 
there a way to set an "idle timeout" today, and if not, how do you imagine it 
could be accomplished in either Zeppelin, or Spark?

Thanks,
Dylan Meissner
www.gettyimages.com



Re: Spark interpreter idle timeout

2016-03-04 Thread Silvio Fiorito
If you’re using Mesos as your Spark cluster manager, then you can use dynamic 
resource allocation. So as your users are running notebooks the Spark executors 
will scale up and down as needed, per the thresholds you define. And when the 
user is idle, Spark will automatically release resources.
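
For example, the relevant settings (illustrative values; on Mesos this also needs coarse-grained mode and the Mesos external shuffle service running on each agent) would be along these lines:

spark.dynamicAllocation.enabled              true
spark.shuffle.service.enabled                true
spark.dynamicAllocation.minExecutors         1
spark.dynamicAllocation.maxExecutors         10
spark.dynamicAllocation.executorIdleTimeout  60s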

Please see the docs here, for more info:
http://spark.apache.org/docs/latest/job-scheduling.html#dynamic-resource-allocation
http://spark.apache.org/docs/latest/running-on-mesos.html#dynamic-resource-allocation-with-mesos


Thanks,
Silvio

From: Dylan Meissner
Reply-To: "users@zeppelin.incubator.apache.org"
Date: Friday, March 4, 2016 at 11:52 AM
To: "users@zeppelin.incubator.apache.org"
Subject: Spark interpreter idle timeout

Greetings,

We run multiple Zeppelins per user in a Mesos cluster. The Mesos Marathon 
framework hosts the Zeppelin servers, and running a note causes a Spark 
framework to start a Spark context to distribute the workload described in the 
notes. This works well for us.

However, when notebooks are left unattended, we'd like the Spark interpreter to 
shut down. This will free resources that can go to other Mesos frameworks. Is 
there a way to set an "idle timeout" today, and if not, how do you imagine it 
could be accomplished in either Zeppelin, or Spark?

Thanks,
Dylan Meissner
www.gettyimages.com


Re: Building and running on Mac OS 11.11

2016-03-03 Thread Silvio Fiorito
You have npm installed right? If so, can you try running `npm install` from the 
zeppelin-web dir and see what errors you’re getting?
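
i.e. something like, from the repo root:

cd zeppelin-web
npm install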

From: Jose Celaya <jcel...@slb.com>
Reply-To: "users@zeppelin.incubator.apache.org"
Date: Thursday, March 3, 2016 at 1:48 PM
To: "users@zeppelin.incubator.apache.org"
Subject: Re: Building and running on Mac OS 11.11

Thank you again, all, for looking into this; greatly appreciated. This is what I 
just tried (a copy of the beginning and end). If there is a better way to share this, 
please let me know.
regards,
José

Joses-Mac-Pro:incubator-zeppelin-master JCelaya$ java -version
java version "1.7.0_79"
Java(TM) SE Runtime Environment (build 1.7.0_79-b15)
Java HotSpot(TM) 64-Bit Server VM (build 24.79-b02, mixed mode)
Joses-Mac-Pro:incubator-zeppelin-master JCelaya$ mvn --version
Apache Maven 3.3.9 (bb52d8502b132ec0a5a3f4c09453c07478323dc5; 
2015-11-10T08:41:47-08:00)
Maven home: /Users/JCelaya/apache-maven-3.3.9
Java version: 1.7.0_79, vendor: Oracle Corporation
Java home: /Library/Java/JavaVirtualMachines/jdk1.7.0_79.jdk/Contents/Home/jre
Default locale: en_US, platform encoding: UTF-8
OS name: "mac os x", version: "10.11.3", arch: "x86_64", family: "mac"
Joses-Mac-Pro:incubator-zeppelin-master JCelaya$



Joses-Mac-Pro:incubator-zeppelin-master JCelaya$ mvn clean package -DskipTests
[INFO] Scanning for projects...
[WARNING]
[WARNING] Some problems were encountered while building the effective model for 
org.apache.zeppelin:zeppelin-zengine:jar:0.6.0-incubating-SNAPSHOT
[WARNING] 'dependencies.dependency.(groupId:artifactId:type:classifier)' must 
be unique: junit:junit:jar -> duplicate declaration of version (?) @ line 204, 
column 17
[WARNING]
[WARNING] Some problems were encountered while building the effective model for 
org.apache.zeppelin:zeppelin-spark:jar:0.6.0-incubating-SNAPSHOT
[WARNING] 'build.plugins.plugin.version' for org.scala-tools:maven-scala-plugin 
is missing. @ line 378, column 15
[WARNING]
[WARNING] Some problems were encountered while building the effective model for 
org.apache.zeppelin:zeppelin-flink:jar:0.6.0-incubating-SNAPSHOT
[WARNING] 'build.plugins.plugin.(groupId:artifactId)' must be unique but found 
duplicate declaration of plugin 
org.apache.maven.plugins:maven-dependency-plugin @ line 349, column 15
[WARNING]
[WARNING] Some problems were encountered while building the effective model for 
org.apache.zeppelin:zeppelin-cassandra:jar:0.6.0-incubating-SNAPSHOT
[WARNING] 'build.plugins.plugin.version' for org.scala-tools:maven-scala-plugin 
is missing. @ line 173, column 21
[WARNING]
[WARNING] It is highly recommended to fix these problems because they threaten 
the stability of your build.
[WARNING]
[WARNING] For this reason, future Maven versions might no longer support 
building such malformed projects.
[WARNING]
[INFO] 
[INFO] Reactor Build Order:
[INFO]
[INFO] Zeppelin


[INFO] Zeppelin: Tachyon interpreter .. SUCCESS [  0.763 s]
[INFO] Zeppelin: web Application .. FAILURE [  8.116 s]
[INFO] Zeppelin: Server ... SKIPPED
[INFO] Zeppelin: Packaging distribution ... SKIPPED
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 01:59 min
[INFO] Finished at: 2016-03-03T10:46:05-08:00
[INFO] Final Memory: 288M/4565M
[INFO] 
[ERROR] Failed to execute goal 
com.github.eirslett:frontend-maven-plugin:0.0.25:npm (npm install) on project 
zeppelin-web: Failed to run task: 'npm install --color=false' failed. (error 
code 1) -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR]
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :zeppelin-web
Joses-Mac-Pro:incubator-zeppelin-master JCelaya$

On Mar 3, 2016, at 10:38 AM, Silvio Fiorito <silvio.fior...@granturing.com> wrote:

Can you give us an idea of the errors you're getting? I build regularly on both

Re: Building and running on Mac OS 11.11

2016-03-03 Thread Silvio Fiorito
Can you give us an idea of the errors you're getting? I build regularly on both 
Mac and Windows without a problem, but your troubles could be due to a number of 
factors.

Also, you should not need sudo for building unless there are some incorrect 
file permissions.
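
For reference, the basic build I run is along these lines (the profile names match that era's build instructions and may differ in your checkout):

mvn clean package -DskipTests -Pspark-1.6 -Phadoop-2.6 -Ppyspark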

From: Jose Celaya
Reply-To: "users@zeppelin.incubator.apache.org"
Date: Thursday, March 3, 2016 at 12:11 PM
To: "users@zeppelin.incubator.apache.org"
Subject: Building and running on Mac OS 11.11

Hi,

Are there any suggestions or links to tutorials on building and installing for 
Mac OS 11.11?

I have been trying for two days with no success on building. I have Java 7 
installed and working, the latest Maven installed and working, and installed a few 
side packages as well, but I am still having issues building barebones and also with 
Spark 1.6 and Hadoop.

I must say I am not an expert at building open source software so I would 
prefer a binary that I can install. We are quite excited about this tool and 
would like to give it a try.

In any case, any help will be greatly appreciated.

cheers

José





José R. Celaya, Ph.D.

Senior Data Scientist
Schlumberger Software Technology Innovation Center
Menlo Park, CA

jcel...@slb.com




RE: problem with start H2OContent

2016-03-02 Thread Silvio Fiorito
PooledObjectFactory.java:60)
at 
org.apache.commons.pool2.impl.GenericObjectPool.create(GenericObjectPool.java:861)
at 
org.apache.commons.pool2.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:435)
at 
org.apache.commons.pool2.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:363)
at 
org.apache.zeppelin.interpreter.remote.RemoteInterpreterProcess.getClient(RemoteInterpreterProcess.java:139)
at 
org.apache.zeppelin.interpreter.remote.RemoteInterpreter.init(RemoteInterpreter.java:129)
... 12 more
Caused by: org.apache.thrift.transport.TTransportException: 
java.net.ConnectException: Connection refused
at org.apache.thrift.transport.TSocket.open(TSocket.java:187)
at 
org.apache.zeppelin.interpreter.remote.ClientFactory.create(ClientFactory.java:51)
... 19 more
Caused by: java.net.ConnectException: Connection refused
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
at 
java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:579)
at org.apache.thrift.transport.TSocket.open(TSocket.java:182)
... 20 more"

On Mon, Feb 29, 2016 at 7:35 PM, Silvio Fiorito <silvio.fior...@granturing.com> wrote:

Can you try running it from just a Spark shell to confirm it works that way (no 
other conflict)?

bin/spark-shell --master local[*] --packages 
ai.h2o:sparkling-water-core_2.10:1.5.10

Also, are you able to run the Spark interpreter without the h2o package?

Thanks,
Silvio

From: Aleksandr Modestov <aleksandrmodes...@gmail.com>
Sent: Monday, February 29, 2016 11:30 AM
To: users@zeppelin.incubator.apache.org
Subject: Re: problem with start H2OContent

I use Spark 1.5.
The problem is with the external Spark; with the internal Spark I cannot launch 
H2OContext :)
The error is:

"ERROR [2016-02-29 19:28:16,609] ({pool-1-thread-3} 
NotebookServer.java[afterStatusChange]:766) - Error
org.apache.zeppelin.interpreter.InterpreterException: 
org.apache.zeppelin.interpreter.InterpreterException: 
org.apache.thrift.transport.TTransportException: java.net.ConnectException: 
Connection refused
at 
org.apache.zeppelin.interpreter.remote.RemoteInterpreter.getFormType(RemoteInterpreter.java:268)
at 
org.apache.zeppelin.interpreter.LazyOpenInterpreter.getFormType(LazyOpenInterpreter.java:104)
at org.apache.zeppelin.notebook.Paragraph.jobRun(Paragraph.java:198)
at org.apache.zeppelin.scheduler.Job.run(Job.java:169)
at 
org.apache.zeppelin.scheduler.RemoteScheduler$JobRunner.run(RemoteScheduler.java:322)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:178)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:292)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.zeppelin.interpreter.InterpreterException: 
org.apache.thrift.transport.TTransportException: java.net.ConnectException: 
Connection refused
at 
org.apache.zeppelin.interpreter.remote.ClientFactory.create(ClientFactory.java:53)
at 
org.apache.zeppelin.interpreter.remote.ClientFactory.create(ClientFactory.java:37)
at 
org.apache.commons.pool2.BasePooledObjectFactory.makeObject(BasePooledObjectFactory.java:60)
at 
org.apache.commons.pool2.impl.GenericObjectPool.create(GenericObjectPool.java:861)
at 
org.apache.commons.pool2.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:435)
at 
org.apache.commons.pool2.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:363)
at 
org.apache.zeppelin.interpreter.remote.RemoteInterpreterProcess.getClient(RemoteInterpreterProcess.java:139)
at 
org.apache.zeppelin.interpreter.remote.RemoteInterpreter.getFormType(RemoteInterpreter.java:266)
... 11 more
Caused by: org.apache.thrift.transport.TTransportException: 
java.net.ConnectException: Connection refused
at org.apache.thrift.transport.TSocket.open(TSocket.java:187)
at 
org.apache.zeppelin.interpreter.remote.ClientFactory.create(ClientFactory.java:51)
... 18 more
Caused by: java.net.ConnectException: Connection refused
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
at 
java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
at java.net.SocksSocketImpl

RE: problem with start H2OContent

2016-02-29 Thread Silvio Fiorito

Can you try running it from just a Spark shell to confirm it works that way (no 
other conflict)?

bin/spark-shell --master local[*] --packages 
ai.h2o:sparkling-water-core_2.10:1.5.10

Also, are you able to run the Spark interpreter without the h2o package?

Thanks,
Silvio

From: Aleksandr Modestov <aleksandrmodes...@gmail.com>
Sent: Monday, February 29, 2016 11:30 AM
To: users@zeppelin.incubator.apache.org
Subject: Re: problem with start H2OContent

I use Spark 1.5.
The problem is with the external Spark; with the internal Spark I cannot launch 
H2OContext :)
The error is:

"ERROR [2016-02-29 19:28:16,609] ({pool-1-thread-3} 
NotebookServer.java[afterStatusChange]:766) - Error
org.apache.zeppelin.interpreter.InterpreterException: 
org.apache.zeppelin.interpreter.InterpreterException: 
org.apache.thrift.transport.TTransportException: java.net.ConnectException: 
Connection refused
at 
org.apache.zeppelin.interpreter.remote.RemoteInterpreter.getFormType(RemoteInterpreter.java:268)
at 
org.apache.zeppelin.interpreter.LazyOpenInterpreter.getFormType(LazyOpenInterpreter.java:104)
at org.apache.zeppelin.notebook.Paragraph.jobRun(Paragraph.java:198)
at org.apache.zeppelin.scheduler.Job.run(Job.java:169)
at 
org.apache.zeppelin.scheduler.RemoteScheduler$JobRunner.run(RemoteScheduler.java:322)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:178)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:292)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.zeppelin.interpreter.InterpreterException: 
org.apache.thrift.transport.TTransportException: java.net.ConnectException: 
Connection refused
at 
org.apache.zeppelin.interpreter.remote.ClientFactory.create(ClientFactory.java:53)
at 
org.apache.zeppelin.interpreter.remote.ClientFactory.create(ClientFactory.java:37)
at 
org.apache.commons.pool2.BasePooledObjectFactory.makeObject(BasePooledObjectFactory.java:60)
at 
org.apache.commons.pool2.impl.GenericObjectPool.create(GenericObjectPool.java:861)
at 
org.apache.commons.pool2.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:435)
at 
org.apache.commons.pool2.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:363)
at 
org.apache.zeppelin.interpreter.remote.RemoteInterpreterProcess.getClient(RemoteInterpreterProcess.java:139)
at 
org.apache.zeppelin.interpreter.remote.RemoteInterpreter.getFormType(RemoteInterpreter.java:266)
... 11 more
Caused by: org.apache.thrift.transport.TTransportException: 
java.net.ConnectException: Connection refused
at org.apache.thrift.transport.TSocket.open(TSocket.java:187)
at 
org.apache.zeppelin.interpreter.remote.ClientFactory.create(ClientFactory.java:51)
... 18 more
Caused by: java.net.ConnectException: Connection refused
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
at 
java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:579)
at org.apache.thrift.transport.TSocket.open(TSocket.java:182)
... 19 more"

On Mon, Feb 29, 2016 at 7:07 PM, Silvio Fiorito <silvio.fior...@granturing.com> wrote:
In your zeppelin-env did you set SPARK_HOME and SPARK_SUBMIT_OPTIONS? Anything in 
the logs? It looks like the interpreter failed to start.

Also, Sparkling Water currently supports up to 1.5 only, last I checked.

Thanks,
Silvio



From: Aleksandr Modestov <aleksandrmodes...@gmail.com>
Sent: Monday, February 29, 2016 10:43 AM
To: users@zeppelin.incubator.apache.org
Subject: Re: problem with start H2OContent

When I use external Spark I get an exception:

java.net.ConnectException: Connection refused at 
java.net.PlainSocketImpl.socketConnect(Native Method) at 
java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339) at 
java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
 at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182) 
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) at 
java.net.Socket.connect(Socket.java:579) at 
org.apache.thrift.transport.TSocket.open(TSocket.java:182) at 
org.apache.zeppelin.interpreter.remote.ClientFactory.create(ClientFactory.java:51)
 at 
org.apache.zeppelin.interpreter.rem

RE: problem with start H2OContent

2016-02-29 Thread Silvio Fiorito
In your zeppelin-env did you set SPARK_HOME and SPARK_SUBMIT_OPTIONS? Anything in 
the logs? It looks like the interpreter failed to start.

Also, Sparkling Water currently supports up to 1.5 only, last I checked.

Thanks,
Silvio



From: Aleksandr Modestov <aleksandrmodes...@gmail.com>
Sent: Monday, February 29, 2016 10:43 AM
To: users@zeppelin.incubator.apache.org
Subject: Re: problem with start H2OContent

When I use external Spark I get an exception:

java.net.ConnectException: Connection refused at 
java.net.PlainSocketImpl.socketConnect(Native Method) at 
java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339) at 
java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
 at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182) 
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) at 
java.net.Socket.connect(Socket.java:579) at 
org.apache.thrift.transport.TSocket.open(TSocket.java:182) at 
org.apache.zeppelin.interpreter.remote.ClientFactory.create(ClientFactory.java:51)
 at 
org.apache.zeppelin.interpreter.remote.ClientFactory.create(ClientFactory.java:37)
 at 
org.apache.commons.pool2.BasePooledObjectFactory.makeObject(BasePooledObjectFactory.java:60)
 at 
org.apache.commons.pool2.impl.GenericObjectPool.create(GenericObjectPool.java:861)
 at 
org.apache.commons.pool2.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:435)
 at 
org.apache.commons.pool2.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:363)
 at 
org.apache.zeppelin.interpreter.remote.RemoteInterpreterProcess.getClient(RemoteInterpreterProcess.java:139)
 at 
org.apache.zeppelin.interpreter.remote.RemoteInterpreter.init(RemoteInterpreter.java:129)
 at 
org.apache.zeppelin.interpreter.remote.RemoteInterpreter.getFormType(RemoteInterpreter.java:257)
 at 
org.apache.zeppelin.interpreter.LazyOpenInterpreter.getFormType(LazyOpenInterpreter.java:104)
 at org.apache.zeppelin.notebook.Paragraph.jobRun(Paragraph.java:198) at 
org.apache.zeppelin.scheduler.Job.run(Job.java:169) at 
org.apache.zeppelin.scheduler.RemoteScheduler$JobRunner.run(RemoteScheduler.java:322)
 at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) at 
java.util.concurrent.FutureTask.run(FutureTask.java:262) at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:178)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:292)
 at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) 
at java.lang.Thread.run(Thread.java:745)


On Mon, Feb 29, 2016 at 5:43 PM, Silvio Fiorito <silvio.fior...@granturing.com> wrote:
It doesn’t seem to be loading transitive dependencies properly. When I was 
helping someone else set this up recently, I had to use 
SPARK_SUBMIT_OPTIONS=“--packages ai.h2o:sparkling-water-core_2.10:1.5.10” with 
an external Spark installation (vs using bundled Spark in Zeppelin).

From: Aleksandr Modestov <aleksandrmodes...@gmail.com>
Reply-To: "users@zeppelin.incubator.apache.org"
Date: Monday, February 29, 2016 at 9:30 AM
To: "users@zeppelin.incubator.apache.org"
Subject: Re: problem with start H2OContent

In a conf file I added the package but it doesn't work, so I use 'z.load("...")'.

On Mon, Feb 29, 2016 at 5:25 PM, vincent gromakowski <vincent.gromakow...@gmail.com> wrote:
Your H2O jar is not loaded in the Spark classpath. Maybe retry loading it with 
z.load("...") or add the spark.jars parameter in the Spark interpreter configuration.

2016-02-29 15:23 GMT+01:00 Aleksandr Modestov <aleksandrmodes...@gmail.com>:
I did the import: "import org.apache.spark.h2o._".
What do you mean by "it's probably a problem with your classpath"?


On Mon, Feb 29, 2016 at 5:19 PM, vincent gromakowski <vincent.gromakow...@gmail.com> wrote:
Don't forget to do the import. If that's done, it's probably a problem with your 
classpath...

2016-02-29 15:03 GMT+01:00 Aleksandr Modestov <aleksandrmodes...@gmail.com>:
Hello all,
There is a problem when I start to initialize H2OContext.
Does anybody know the answer?

java.lang.NoClassDefFoundError: water/api/HandlerFactory at 
org.apac

Re: problem with start H2OContent

2016-02-29 Thread Silvio Fiorito
It doesn’t seem to be loading transitive dependencies properly. When I was 
helping someone else set this up recently, I had to use 
SPARK_SUBMIT_OPTIONS=“--packages ai.h2o:sparkling-water-core_2.10:1.5.10” with 
an external Spark installation (vs using bundled Spark in Zeppelin).
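
Concretely, the conf/zeppelin-env.sh entries look something like this (the SPARK_HOME path is a placeholder):

export SPARK_HOME=/opt/spark-1.5.2-bin-hadoop2.6
export SPARK_SUBMIT_OPTIONS="--packages ai.h2o:sparkling-water-core_2.10:1.5.10"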

From: Aleksandr Modestov
Reply-To: "users@zeppelin.incubator.apache.org"
Date: Monday, February 29, 2016 at 9:30 AM
To: "users@zeppelin.incubator.apache.org"
Subject: Re: problem with start H2OContent

In a conf file I added the package but it doesn't work, so I use 'z.load("...")'.

On Mon, Feb 29, 2016 at 5:25 PM, vincent gromakowski wrote:
Your H2O jar is not loaded in the Spark classpath. Maybe retry loading it with 
z.load("...") or add the spark.jars parameter in the Spark interpreter configuration.

2016-02-29 15:23 GMT+01:00 Aleksandr Modestov:
I did the import: "import org.apache.spark.h2o._".
What do you mean by "it's probably a problem with your classpath"?


On Mon, Feb 29, 2016 at 5:19 PM, vincent gromakowski wrote:
Don't forget to do the import. If that's done, it's probably a problem with your 
classpath...

2016-02-29 15:03 GMT+01:00 Aleksandr Modestov:
Hello all,
There is a problem when I start to initialize H2OContext.
Does anybody know the answer?

java.lang.NoClassDefFoundError: water/api/HandlerFactory at 
org.apache.spark.h2o.H2OContext.start(H2OContext.scala:107) at 
$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.(:65)
 at 
$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.(:70)
 at 
$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.(:72)
 at 
$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.(:74)
 at 
$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.(:76)
 at 
$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.(:78)
 at 
$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.(:80)
 at 
$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.(:82)
 at 
$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.(:84)
 at 
$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.(:86)
 at 
$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.(:88)
 at 
$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.(:90)
 at 
$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.(:92)
 at 
$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.(:94)
 at 
$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.(:96)
 at 
$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.(:98)
 at 
$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.(:100)
 at 
$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.(:102) at 
$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.(:104) at 
$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.(:106) at 
$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.(:108) at 
$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.(:110) at 
$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.(:112) at 
$iwC$$iwC$$iwC$$iwC$$iwC.(:114) at 
$iwC$$iwC$$iwC$$iwC.(:116) at 
$iwC$$iwC$$iwC.(:118) at $iwC$$iwC.(:120) at 
$iwC.(:122) at (:124) at .(:128) 
at .() at .(:7) at .() at 
$print() at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
Method) at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) 
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606) at 
org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:1065) at 
org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1346) at 
org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:840) at 
org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:871) at 

Re: Sparkling Water (H2O) Interpreter

2016-02-25 Thread Silvio Fiorito
Hi,

We actually just had a talk at our DC Spark Meetup from H2O last night. Since 
Sparkling Water is available as a Spark Package, you could very quickly define 
a new interpreter that includes the necessary package, as described in the JIRA 
comments. That way you don’t need to wait for a new release. That’s how they 
demoed it running on Databricks Cloud in fact.
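
As an alternative to defining a new interpreter, one quick way to pull the package in (assuming the %dep interpreter is enabled and this runs before the first %spark paragraph of the session) is something like:

%dep
z.reset()
z.load("ai.h2o:sparkling-water-core_2.10:1.5.10")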

Let me know if you’d like some help setting that up.

Thanks,
Silvio

From: Sourigna Phetsarath
Reply-To: "users@zeppelin.incubator.apache.org"
Date: Thursday, February 25, 2016 at 11:10 AM
To: "users@zeppelin.incubator.apache.org"
Subject: Sparkling Water (H2O) Interpreter

All:

I saw this ticket: https://issues.apache.org/jira/browse/ZEPPELIN-582. Is anyone 
currently working on it for the next release?

Thanks for any information that you can provide.

--

Gna Phetsarath
System Architect // AOL Platforms // Data Services // Applied Research Chapter
770 Broadway, 5th Floor, New York, NY 10003
o: 212.402.4871 // m: 917.373.7363
vvmr: 8890237 aim: sphetsarath20 t: @sourigna



RE: Unable to start Zeppelin

2016-02-20 Thread Silvio Fiorito
Hi Ankur,

Glad you were able to get up and running with my patch. You should set the 
ZEPPELIN_NOTEBOOK_DIR value in zeppelin-env.cmd rather than zeppelin.cmd 
though. That way you can keep your settings across releases without worrying 
about them getting overwritten.

I’ve been working on a few ideas for how to handle the URI issue, but right now 
the easiest is to just explicitly define the absolute URI.

Please let me know if you have any issues with the Windows scripts. I tested 
Hive, Spark, and Flink (very briefly) and they all seemed to work well. Hadoop 
and Hive have a few quirks on Windows. Mainly you need to ensure you have 
winutils.exe in your HADOOP_HOME\bin and you’ll need to fix the HDFS 
permissions on the Hive temp dir, which defaults to /tmp/hive. It’ll be on the 
root of whichever drive your Zeppelin working dir is, so for instance I run 
Zeppelin from E:\src\incubator-zeppelin, so I have E:\tmp\hive by default. You 
need to run “hdfs dfs -chmod 777 /tmp/hive” from your E: drive. This way Hive 
and Spark SQLContext will work properly.

Thanks,
Silvio

From: Ankur Jain
Sent: Saturday, February 20, 2016 11:36 PM
To: 
users@zeppelin.incubator.apache.org
Subject: RE: Unable to start Zeppelin

Thanks Alex,

For other users, below are the things I followed to resolve the issue at my end…

I am using Windows 7 to run Zeppelin:

I also used the patch (cmd files) provided by @granturing: 
https://github.com/apache/incubator-zeppelin/pull/734#issuecomment-186653407

I had to manually configure ZEPPELIN_NOTEBOOK_DIR in zeppelin.cmd as below….

set ZEPPELIN_NOTEBOOK_DIR=file:///F:/Zeppelin-new/incubator-zeppelin/notebook
"%ZEPPELIN_RUNNER%" %JAVA_OPTS% -cp %ZEPPELIN_CLASSPATH_OVERRIDES%;%CLASSPATH% 
%ZEPPELIN_SERVER% "%*"



This was required for a couple of reasons…
http://stackoverflow.com/questions/7998574/apache-commons-vfs-cannot-resolvefile
https://commons.apache.org/proper/commons-vfs/filesystems.html


Regards,
Ankur

From: Alexander Bezzubov [mailto:b...@apache.org]
Sent: 20 February 2016 05:54 PM
To: users@zeppelin.incubator.apache.org
Subject: Re: Unable to start Zeppelin

Hi Ankur,

Zeppelin has a pluggable notebook storage implementation, configured e.g. through 
`conf/zeppelin-env.sh` using 'export 
ZEPPELIN_NOTEBOOK_STORAGE="org.apache.zeppelin.notebook.repo.VFSNotebookRepo"' 
(this is the default, so you should see something like `Empty 
ZEPPELIN_NOTEBOOK_STORAGE conf parameter, using default` in the logs).

In your case it looks like this somehow is not configured properly, so could 
you please check those files and see if that is the case? You can always try 
adding a default one.

Hope this helps!

--
Alex

On Sat, Feb 20, 2016 at 7:20 PM, Ankur Jain wrote:
Hello Team,

I am trying to start Zeppelin but am getting the error below.
Can you guide me on how to resolve it?

at 
org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:64)
at 
org.apache.zeppelin.server.ZeppelinServer.main(ZeppelinServer.java:113)
Caused by: java.io.IOException: Requested storage index 0 isn't initialized, 
repository count is 0
at 
org.apache.zeppelin.notebook.repo.NotebookRepoSync.getRepo(NotebookRepoSync.java:228)
at 
org.apache.zeppelin.notebook.repo.NotebookRepoSync.list(NotebookRepoSync.java:118)
at 
org.apache.zeppelin.notebook.Notebook.loadAllNotes(Notebook.java:391)
at 
org.apache.zeppelin.notebook.Notebook.(Notebook.java:108)
at 
org.apache.zeppelin.server.ZeppelinServer.(ZeppelinServer.java:87)



Thanks
Ankur


RE: Does zeppelin work on windows?

2016-02-20 Thread Silvio Fiorito
Hello everyone,

I've submitted a PR here https://github.com/apache/incubator-zeppelin/pull/734 
for initial Windows support. There are still a few things to handle, detailed 
in the PR.

I've tested these scripts on Windows 10 desktop and Azure Web App.

Thanks,
Silvio

From: Ankur Jain <ankur.j...@yash.com>
Sent: Saturday, February 20, 2016 4:21 AM
To: users@zeppelin.incubator.apache.org
Subject: RE: Does zeppelin work on windows?

I am also trying to set up Zeppelin on Windows 7.
Please share your bat files; that would be helpful and I can also verify them...

Thanks
Ankur

From: moon soo Lee [mailto:m...@apache.org]
Sent: 20 February 2016 02:26 PM
To: users@zeppelin.incubator.apache.org
Subject: Re: Does zeppelin work on windows?

+1 for Windows support.
Looking forward to the Windows batch files.
I can test on my Windows XP (though I'm not sure other people still use it).

Best,
moon

On Thu, Feb 11, 2016 at 5:23 AM Silvio Fiorito <silvio.fior...@granturing.com> wrote:
I actually have some Windows batch files that I plan on submitting as a PR soon.

I've tested them so far on Win10 and an Azure Web App.

If anyone would like to help testing on more environments please let me know.




On 2/11/16, 6:26 AM, "Kamalkanta" <aagust...@gmail.com> wrote:

>Hi puneet,
>
>Have a look on it, Sparklet - Apache Spark & Zeppelin Installer for Windows
>64 bit system.
>http://mund-consulting.com/products/sparklet.aspx
>
>
>Regards,
>-Kamal
>-http://mund-consulting.com/
>
>
>
>--
>View this message in context: 
>http://apache-zeppelin-users-incubating-mailing-list.75479.x6.nabble.com/Does-zeppelin-work-on-windows-tp273p2218.html
>Sent from the Apache Zeppelin Users (incubating) mailing list mailing list 
>archive at Nabble.com.


Re: Cannot get Zeppelin to work

2016-02-17 Thread Silvio Fiorito
Hi,

If you want Windows scripts to run Zeppelin please take a look at this branch 
on my fork. I have a couple more things to do before I’m ready to submit as a 
PR, but these scripts have been tested on my Windows 10 machine as well Azure 
App Service.

https://github.com/granturing/incubator-zeppelin/commit/9e404823cf4c56a948cd49fb896c1de6775a70c1

You will need to set your ZEPPELIN_NOTEBOOK_DIR to something similar to 
file://c:/notebook in zeppelin-env.cmd for now since the code is assuming a URI 
which won’t work with default Windows paths.

Thanks,
Silvio

From: Rohit Jain
Reply-To: "users@zeppelin.incubator.apache.org"
Date: Wednesday, February 17, 2016 at 6:30 PM
To: "users@zeppelin.incubator.apache.org"
Subject: RE: Cannot get Zeppelin to work

I figured when you mentioned Task Manager you were referring to Windows.

· When I do a ./zeppelin-daemon.sh start I get:

o   Zeppelin start [OK]

o   And Zeppelin process dies [FAILED]

· I do a ./zeppelin-daemon.sh status

o   and I get Zeppelin running but process is dead [FAILED]

· And this is with the 0.5.6 binary downloaded to Windows 10 running 
under Cygwin

· I look at Task Manager and I get nothing that looks like Zeppelin

· Nothing on localhost:8080

· I have not checked if Bitdefender has a firewall that is preventing 
Zeppelin from running.  I will try that if you think that the Zeppelin process dies 
because of firewall issues.

On the centos:

· I do the same as above on either the 0.5.6 binary or the one I built 
from the source I downloaded.

· The start says [OK]

· Status says Zeppelin is running [OK]

· Every indication is that it is listening on that port 8080

· Firewall is turned off

· But still get 404 on localhost:8080

· Have tried other ports before with the same result

·         Also, a jstack against the Zeppelin process cannot attach to it, almost as if it hangs right away.

Rohit

From: Felix Cheung [mailto:felixcheun...@hotmail.com]
Sent: Wednesday, February 17, 2016 4:36 PM
To: users@zeppelin.incubator.apache.org
Subject: Re: Cannot get Zeppelin to work

Could you check in Task Manager that it is running?
Also could this be blocked by firewall rules?



On Wed, Feb 17, 2016 at 2:30 PM -0800, "Rohit Jain" wrote:

Hi folks,



I tried various ways to get Zeppelin to work and don’t seem to be having any 
luck.



I tried these on my Windows 10 PC:

· Tried to build Zeppelin and got the bower error 01 as documented at 
http://madhukaudantha.blogspot.kr/2015/04/building-zeppelin-in-windows-8.html 
but could not figure out how to install bower to fix that problem since I kept 
getting that same error if I tried to install it using mvn

· Downloaded the  0.5.6 binary and did a start.  Got zeppelin start 
[OK] but it was followed by zeppelin process died [FAILED].  Nothing in the log 
to indicate where the problems may be.



I tried these on our internal centos dev machine:

· Build Zeppelin from source

· Downloaded 0.5.6 binary

· Both of the above seem to indicate that they have started but when I 
go to localhost:8080 or whatever port I assign to ZEPPELIN_PORT I get a 
404 error.  I do a status on zeppelin and it says it is not running.  Nothing 
in the logs.



Obviously a number of you folks have gotten this to work.  I am a relative 
neophyte.  But it seems it should work or tell me why it can’t.



Rohit Jain




Anyone at Spark Summit East this week?

2016-02-14 Thread Silvio Fiorito
Hello fellow Zeppelin users,

Just wondering if anyone will be at Spark Summit East this week in New York?

If so, would love to meet and talk about Zeppelin, Spark, etc. If you’re 
especially interested in Windows or Azure I'd be happy to talk and demo what 
I’ve been working on that will hopefully eventually be directly integrated into 
Zeppelin.

I’ll be there Tuesday assisting with the training and then of course for the 
conference on Wednesday and Thursday. Unfortunately I have to leave Thursday 
immediately following the closing talk.

Thanks,
Silvio



Re: Does zeppelin work on windows?

2016-02-11 Thread Silvio Fiorito
I actually have some Windows batch files that I plan on submitting as a PR soon.

I’ve tested them so far on Win10 and an Azure Web App.

If anyone would like to help testing on more environments please let me know.




On 2/11/16, 6:26 AM, "Kamalkanta"  wrote:

>Hi puneet,
>
>Have a look on it, Sparklet - Apache Spark & Zeppelin Installer for Windows
>64 bit system.
>http://mund-consulting.com/products/sparklet.aspx
>
>
>Regards,
>-Kamal
>-http://mund-consulting.com/
>
>
>
>--
>View this message in context: 
>http://apache-zeppelin-users-incubating-mailing-list.75479.x6.nabble.com/Does-zeppelin-work-on-windows-tp273p2218.html
>Sent from the Apache Zeppelin Users (incubating) mailing list mailing list 
>archive at Nabble.com.


Re: Real time chart in Zeppelin?

2015-11-11 Thread Silvio Fiorito
Hi Roger,

Here’s an example I made using Angular and Leaflet updating based on a Spark 
Streaming app

https://gist.github.com/granturing/a09aed4a302a7367be92

Same concept could be used for any other client side JavaScript (d3, etc.)

Thanks,
Silvio

From: Roger Hui
Reply-To: "users@zeppelin.incubator.apache.org"
Date: Tuesday, November 10, 2015 at 11:14 PM
To: "users@zeppelin.incubator.apache.org"
Subject: Real time chart in Zeppelin?

Hi,

Is there any example of displaying a real-time chart in Zeppelin? Also, on 
the rendering side, how could I import my own d3 scripts or SVG graphs to 
reflect the data changes in real time?

Thanks,
Roger


Need official release packages

2015-10-21 Thread Silvio Fiorito
For the Zeppelin maintainers,

Just a heads up that I think the lack of official binary releases on the 
Zeppelin website and GitHub gives the wrong impression. I personally always 
build from source and package my own distribution, but for visibility and 
newbies it would be best to have regular release packages. Also if there’s a 
way to provide packages for the current and previous versions of Spark that'd 
probably be good as well.

I’m happy to help in any way. Could we just use Jenkins to punch out packages 
automatically?

Thanks,
Silvio


Re: Need official release packages

2015-10-21 Thread Silvio Fiorito
OK, yeah I’m not entirely familiar with the official Apache release process. I 
also meant Travis-CI (vs Jenkins) since that’s what you already use for CI. I 
wonder if it’s ok to at least provide snapshot releases based on the master 
branch?

I think recurring releases are a good idea as well, as they help time-box things and 
make them more predictable (like I know to expect a Spark release every quarter).

From: moon soo Lee
Reply-To: "users@zeppelin.incubator.apache.org"
Date: Wednesday, October 21, 2015 at 11:41 PM
To: "users@zeppelin.incubator.apache.org"
Subject: Re: Need official release packages

I appreciate the opinions.

About automatic packaging using Jenkins: in my understanding, that is not how an 
Apache project makes releases. Correct me if I'm wrong.

The Zeppelin community hasn't made a lot of releases.
Vinay and many other people have suggested, both online and offline, changing the 
release policy to be date-based (e.g. a release every 3 months), and I think it will 
help Zeppelin make more releases.

Let me spend some time on https://issues.apache.org/jira/browse/ZEPPELIN-311. 
Any help on this issue is very appreciated.

Thanks,
moon


On Thu, Oct 22, 2015 at 1:51 AM Steven Kirtzic <steven.kirtzic.f...@statefarm.com> wrote:
I agree. For where we are trying to use Zeppelin, our company has MANY 
different security features which prevent us from fully building from source, 
so currently the only version of Zeppelin that we can use are the binaries with 
a particular version of Spark. We’re really hoping that the next version of 
Zeppelin is available that way as well, as it would be much easier going that 
route than trying to get around our security precautions (and safer ☺). Anyhow, 
just my two cents. Thanks,

-Steven

From: Silvio Fiorito [mailto:silvio.fior...@granturing.com]
Sent: Wednesday, October 21, 2015 11:44 AM
To: users@zeppelin.incubator.apache.org
Subject: Need official release packages

For the Zeppelin maintainers,

Just a heads up that I think the lack of official binary releases on the 
Zeppelin website and GitHub gives the wrong impression. I personally always 
build from source and package my own distribution, but for visibility and 
newbies it would be best to have regular release packages. Also if there’s a 
way to provide packages for the current and previous versions of Spark that'd 
probably be good as well.

I’m happy to help in any way. Could we just use Jenkins to punch out packages 
automatically?

Thanks,
Silvio


Re: Reactive Angular charts sample

2015-10-15 Thread Silvio Fiorito
Hi Chad,

So with that Angular code I posted you can then create an Angular variable 
called “locations” and set it from a separate Scala paragraph, like so:

case class Loc(desc: String, lat: Double, lon: Double)
val locations = Array(Loc("Test", 24.4, 49.8))
z.angularBind("locations", locations)

The locations value would be an array of a case class. In my example, I’m 
expecting the fields “desc”, “lat”, and “lon” (see lines 23-24) but you can use 
whatever you want. Obviously you would set that based on some Spark or other 
query.

Let me know if that works or not!

Thanks,
Silvio

From: Chad Roberts
Reply-To: "users@zeppelin.incubator.apache.org"
Date: Thursday, October 15, 2015 at 4:31 PM
To: "users@zeppelin.incubator.apache.org"
Subject: Re: Reactive Angular charts sample

Silvio, thanks for the examples.

I'm a bit [totally] new when it comes to working with angular.  I'm running 
your map example and I get the map to display just fine, but how would I add 
markers to the map from a separate paragraph?

Thanks,
Chad

On Tue, Oct 6, 2015 at 10:48 AM, Silvio Fiorito <silvio.fior...@granturing.com> wrote:
Hey everyone,

Great I’m glad this was helpful! It definitely opens up a lot of possibilities 
to extend the power of Zeppelin UI.

I’m also working on a few things to make Streaming a bit nicer, like a button 
to start/stop the streaming context and displaying realtime stats of the 
streaming job in the UI as well.

I’ll keep the list updated and post more examples as I get them.

Thanks,
Silvio



Re: Reactive Angular charts sample

2015-10-06 Thread Silvio Fiorito
Hey everyone,

Great I’m glad this was helpful! It definitely opens up a lot of possibilities 
to extend the power of Zeppelin UI.

I’m also working on a few things to make Streaming a bit nicer, like a button 
to start/stop the streaming context and displaying realtime stats of the 
streaming job in the UI as well.

I’ll keep the list updated and post more examples as I get them.

Thanks,
Silvio


DC Spark meetup talk

2015-09-15 Thread Silvio Fiorito
Hey everyone,

Just wanted to point out that if you’re in the Washington DC area I’ll be 
giving a talk on exploratory analysis using Spark SQL and Zeppelin, next week.

http://www.meetup.com/Washington-DC-Area-Spark-Interactive/events/224998133/

There are still a few spots open so if you’re interested sign up!

Thanks,
Silvio


RE: File name too long error in Spark paragraphs

2015-08-23 Thread Silvio Fiorito
I've seen this recently as well. It seems to be an issue with the Scala REPL after 
running and rerunning notebooks with a lot of code.

The only solution I found was to restart the interpreter.

Even Databricks cloud seems to have this issue: 
https://forums.databricks.com/questions/427/why-do-i-see-this-error-when-i-run-my-notebook-jav.html


Thanks,
Silvio

From: Randy Gelhausen <rgel...@gmail.com>
Sent: 8/22/2015 6:33 PM
To: users@zeppelin.incubator.apache.org
Subject: File name too long error in Spark paragraphs

Hi All,

Anyone see something similar to this:

%spark
import org.apache.spark.sql._
import org.apache.phoenix.spark._

val input = "/user/root/crimes/atlanta"

val df = sqlContext.read.format("com.databricks.spark.csv").option("header", "true").option("DROPMALFORMED", "true").load(input)
val columns = df.columns.map(x => x.toUpperCase + " varchar,\n")
columns

The result is an error:
File name too long

I tried commenting out various lines, and then ALL lines, but everything (even 
in new paragraphs) passed to the interpreter results in File name too long.

Am I doing something silly?

Thanks,
-Randy


Re: Executing spark code in Zeppelin

2015-07-29 Thread Silvio Fiorito
Hi Stefan,

Looks like this question was just covered here 
http://mail-archives.apache.org/mod_mbox/incubator-zeppelin-users/201507.mbox/browser

Thanks,
Silvio

From: Stefan Panayotov
Reply-To: users@zeppelin.incubator.apache.org
Date: Wednesday, July 29, 2015 at 9:59 AM
To: users@zeppelin.incubator.apache.org
Subject: FW: Executing spark code in Zeppelin

Hi,

I sent the question below to the Spark user group, but got advice to send it to the 
Zeppelin user group.
Please see below and let me know if you have stumbled on this issue, and any 
possible resolutions.
The limit is 3400 characters. Once I go above that, the Zeppelin paragraph stops 
reacting.

Thanks,

Stefan Panayotov, PhD
Home: 610-355-0919
Cell: 610-517-5586
email: spanayo...@msn.com
spanayo...@outlook.com
spanayo...@comcast.net




From: Stefan Panayotov <spanayo...@msn.com>
Sent: 7/29/2015 8:20 AM
To: user-subscr...@spark.apache.org
Subject: Executing spark code in Zeppelin

Hi,
I faced a problem with running long code snippets in a Zeppelin paragraph. If the 
code passes a certain limit (I still have to check the exact limit), clicking the 
run button or pressing Shift-Enter does nothing. This effect can be demonstrated 
even by adding comments to the code.
Has anybody stumbled on such a problem?

Stefan Panayotov
Sent from my Windows Phone


RE: Zeppelin showing disconnected after successful build and daemon start

2015-05-08 Thread Silvio Fiorito
Did you publish both ports 8080 and 8081?
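
For example, something along these lines (the image name is a placeholder; Zeppelin at the time used a separate websocket port, by default the HTTP port + 1):

docker run -d -p 8080:8080 -p 8081:8081 my-zeppelin-image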


From: Jose Rivera-Rubio <jose.riv...@internavenue.com>
Sent: 5/8/2015 6:16 AM
To: users@zeppelin.incubator.apache.org
Subject: Zeppelin showing disconnected after successful build and daemon start

Hi, I'm running Zeppelin in a Docker container, so my problems shouldn't be 
related to port issues. However, I'm also seeing
"disconnected" in the top right corner and I get this message when I run it:

OpenJDK 64-Bit Server VM warning: ignoring option MaxPermSize=512m; support was 
removed in 8.0
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in 
[jar:file:/zeppelin/zeppelin-server/target/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in 
[jar:file:/zeppelin/zeppelin-zengine/target/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in 
[jar:file:/zeppelin/zeppelin-interpreter/target/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
May 08, 2015 10:01:26 AM com.sun.jersey.api.core.PackagesResourceConfig init
INFO: Scanning for root resource and provider classes in the packages:
  org.apache.zeppelin.rest
  com.wordnik.swagger.jersey.listing
May 08, 2015 10:01:26 AM com.sun.jersey.api.core.ScanningResourceConfig 
logClasses
INFO: Root resource classes found:
  class org.apache.zeppelin.rest.ZeppelinRestApi
  class com.wordnik.swagger.jersey.listing.ApiListingResourceJSON
  class org.apache.zeppelin.rest.NotebookRestApi
  class org.apache.zeppelin.rest.InterpreterRestApi
May 08, 2015 10:01:26 AM com.sun.jersey.api.core.ScanningResourceConfig 
logClasses
INFO: Provider classes found:
  class com.wordnik.swagger.jersey.listing.JerseyResourceListingProvider
  class com.wordnik.swagger.jersey.listing.JerseyApiDeclarationProvider
May 08, 2015 10:01:26 AM 
com.sun.jersey.server.impl.application.WebApplicationImpl _initiate
INFO: Initiating Jersey application, version 'Jersey: 1.9 09/02/2011 11:17 AM'
May 08, 2015 10:01:26 AM 
com.sun.jersey.server.impl.application.WebApplicationImpl _initiate
INFO: Adding the following classes declared in 
META-INF/services/jersey-server-components to the resource configuration:
  class org.atmosphere.jersey.AtmosphereResourceConfigurator
May 08, 2015 10:01:27 AM com.sun.jersey.spi.inject.Errors processErrorMessages
WARNING: The following warnings have been detected with resource and/or 
provider classes:
  WARNING: A HTTP GET method, public javax.ws.rs.core.Response 
org.apache.zeppelin.rest.InterpreterRestApi.listInterpreter(java.lang.String), 
should not consume any entity.


Any ideas? It's driving me mad :|



Re: Multi-user approach

2015-03-27 Thread Silvio Fiorito
I haven’t tried this myself yet but something I’ve been thinking as well. Will 
the nginx reverse proxy support web sockets as well?
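
If it does, I'd expect the location block to need the usual WebSocket upgrade headers, something like (upstream port and paths are placeholders):

location / {
    proxy_pass         http://127.0.0.1:8080;
    proxy_http_version 1.1;
    proxy_set_header   Upgrade $http_upgrade;
    proxy_set_header   Connection "upgrade";
    proxy_set_header   Host $host;
}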

Ideally we’d have isolated SparkContexts so users aren’t trampling over each 
other. Honestly I think it’d be good to have the option of starting a new 
SparkContext per notebook as well or using the model Databricks has where you 
“attach” a notebook to a cluster.

From: RJ Nowling
Reply-To: users@zeppelin.incubator.apache.org
Date: Friday, March 27, 2015 at 12:19 PM
To: users@zeppelin.incubator.apache.org
Subject: Multi-user approach

Hi all,

I'm looking into ways to support multiple users with Zeppelin.  I want to 
provide isolation between users.

I'm considering the following approach:
* Run Zeppelin under each user's account with its own set of ports
* Use nginx as a reverse proxy for providing authentication

Has anyone done anything similar?  Any better alternatives?

Thanks!
RJ