Zeppelin with Separate Spark Connection
Hi All,

I’ve just set up Zeppelin, and I’ve also set up my own Spark with a connection to Alluxio. I installed Zeppelin from the binary distribution. When I use Zeppelin, it seems to be using some internal Spark, not the one that I set up. What configuration should I set so that the notebooks and Spark jobs execute on my own Spark?

I edited zeppelin-env.sh and added SPARK_HOME, but after that, anything I tried to run in my notebook just shot back “ERROR” with no output.

Any help would be much appreciated! Thanks!!

Best,
Keren

The information contained in this e-mail is confidential and/or proprietary to Capital One and/or its affiliates and may only be used solely in performance of work or services for Capital One. The information transmitted herewith is intended only for use by the individual or entity to which it is addressed. If the reader of this message is not the intended recipient, you are hereby notified that any review, retransmission, dissemination, distribution, copying or other use of, or taking of any action in reliance upon this information is strictly prohibited. If you have received this communication in error, please contact the sender and delete the material from your computer.
Re: Zeppelin with Separate Spark Connection
On Wed, Nov 9, 2016 at 1:34 PM, Tseytlin, Keren <keren.tseyt...@capitalone.com> wrote:
> [quoted text trimmed]

Try updating the following in zeppelin-env.sh:

export MASTER="spark://spark-02.softlayer.com:7077"
export SPARK_HOME=/opt/spark-1.6.2-bin-hadoop2.6

Then, in the UI, update the Spark interpreter configuration so that the master URL is properly set.

--
Luciano Resende
http://twitter.com/lresende1975
http://lresende.blogspot.com/
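As a minimal sketch, a zeppelin-env.sh pointing Zeppelin at an external standalone Spark cluster might look like the following. The master hostname and SPARK_HOME path here are illustrative placeholders; substitute the values for your own cluster:

```shell
# conf/zeppelin-env.sh -- illustrative values, adjust for your installation

# Path to the external Spark distribution Zeppelin should use
export SPARK_HOME=/opt/spark-1.6.2-bin-hadoop2.6

# URL of the standalone Spark master (hypothetical hostname)
export MASTER=spark://spark-master.example.com:7077
```

After editing the file, restart Zeppelin (or at least restart the Spark interpreter from the UI) so the new environment variables are picked up.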
Re: Zeppelin with Separate Spark Connection
Hi Keren,

Have you tried setting the 'master' property in the 'interpreter' GUI menu? Basically, setting the SPARK_HOME env variable and the 'master' property should be enough for a basic configuration. Please take a look at http://zeppelin.apache.org/docs/0.6.2/interpreter/spark.html#2-set-master-in-interpreter-menu .

You're trying Zeppelin 0.6.2, right?

Thanks,
moon

On Wed, Nov 9, 2016 at 1:35 PM Tseytlin, Keren <keren.tseyt...@capitalone.com> wrote:
> [quoted text trimmed]
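When a paragraph returns only “ERROR” with no output, the underlying exception usually lands in the Spark interpreter log rather than in the notebook itself. A quick way to look, assuming a hypothetical install path (adjust ZEPPELIN_HOME to your installation):

```shell
# Zeppelin writes per-interpreter logs under $ZEPPELIN_HOME/logs.
ZEPPELIN_HOME=/opt/zeppelin   # hypothetical install location; adjust as needed
tail -n 50 "$ZEPPELIN_HOME"/logs/zeppelin-interpreter-spark-*.log
```

The stack trace there will typically show whether the failure is a bad master URL, a version mismatch with the external Spark, or a classpath problem.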