Does anyone have the same question as me? Or is this not a question?
2015-07-14 11:47 GMT+08:00 linxi zeng :
> hi, Moon:
> I notice that the getScheduler function in SparkInterpreter.java
> returns a FIFOScheduler, which makes the Spark interpreter run Spark jobs one
> by one. It's not a go
Hi,
You can use the dependency loader described here
http://zeppelin.incubator.apache.org/docs/interpreter/spark.html#dependencyloading
to load a Spark package.
Here's an example of how you can do it:
https://www.zeppelinhub.com/#/notebook/zeppelin/2AR3N4YT3
Hope this helps.
Best,
moon
Hi,
How is it possible to do this in Zeppelin?
/bin/pyspark --packages com.databricks:spark-csv_2.11:1.0.3
Regards,
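One way this is commonly done (a sketch based on the dependency-loading docs linked above; exact syntax may vary by Zeppelin version) is to load the same Maven artifact through a %dep paragraph before the first Spark paragraph runs:

```scala
%dep
// clear any previously loaded dependencies for this session
z.reset()
// load the same artifact passed to --packages on the pyspark command line;
// it is resolved from Maven Central and added to the interpreter classpath
z.load("com.databricks:spark-csv_2.11:1.0.3")
```

Note that %dep must run before the Spark interpreter starts; if Spark has already started, restart the interpreter first.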
hi,
our Mesos / Varnish production environment allows only one port to be
exposed to the user's browser.
Is there any possibility to use the same port for both http and websockets?
best regards
Ralf
moon:
hi! Sorry to bother you again. I spent all day trying to find a solution
to the error:
ERROR [2015-07-14 19:17:13,391]
({sparkDriver-akka.actor.default-dispatcher-4}
Slf4jLogger.scala[apply$mcV$sp]:66) - Uncaught fatal error from thread
[sparkDriver-akka.remote.default-remote-dispatcher-