Re: FW: Issue with Zeppelin setup on Datastax-Spark
Hello DuyHai,

The original problem reported by Arpan Saha is related to Datastax. I am using Spark + Zeppelin. Below is the configuration:

- Spark: 2.0.2
- Zeppelin: 0.6.2
- Java: 1.8.0_111
- R: 3.3.1

Thanks,
Abul

On Wed, Nov 16, 2016 at 3:44 PM, DuyHai Doan <doanduy...@gmail.com> wrote:
> I recommend downloading my pre-built Zeppelin for Datastax. Shared folder
> link: https://drive.google.com/folderview?id=0B6wR2aj4Cb6wQ01aR3ItR0xUNms
>
> On Wed, Nov 16, 2016 at 11:13 AM, DuyHai Doan <doanduy...@gmail.com> wrote:
>> OK, I understand why you have the issue.
>>
>> You are using Spark 2.0.2, while the latest Datastax 5.0.3 is still using
>> Spark 1.6.x.
>>
>> On Wed, Nov 16, 2016 at 10:23 AM, Abul Basar <aba...@einext.com> wrote:
>>> I am facing a similar issue while using SparkR.
>>>
>>> My environment:
>>>
>>> - Spark: 2.0.2
>>> - Zeppelin: 0.6.2
>>> - Java: 1.8.0_111
>>> - R: 3.3.1
>>>
>>> SPARK_HOME is set. I am trying to run a simple paragraph:
>>>
>>> %r
>>> print("hello ...")
>>>
>>> I get the following exception.
>>>
>>> *Interpreter Log*
>>>
>>> > # getZeppelinR
>>> > .zeppelinR = SparkR:::callJStatic("org.apache.zeppelin.spark.ZeppelinR", "getZeppelinR", hashCode)
>>>
>>>   at org.apache.zeppelin.spark.ZeppelinR.waitForRScriptInitialized(ZeppelinR.java:295)
>>>   at org.apache.zeppelin.spark.ZeppelinR.request(ZeppelinR.java:235)
>>>   at org.apache.zeppelin.spark.ZeppelinR.eval(ZeppelinR.java:183)
>>>   at org.apache.zeppelin.spark.ZeppelinR.open(ZeppelinR.java:172)
>>>   at org.apache.zeppelin.spark.SparkRInterpreter.open(SparkRInterpreter.java:85)
>>>   at org.apache.zeppelin.interpreter.LazyOpenInterpreter.open(LazyOpenInterpreter.java:69)
>>>   at org.apache.zeppelin.interpreter.LazyOpenInterpreter.getProgress(LazyOpenInterpreter.java:110)
>>>   at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer.getProgress(RemoteInterpreterServer.java:404)
>>>   at org.apache.zeppelin.interpreter.thrift.RemoteInterpreterService$Processor$getProgress.getResult(RemoteInterpreterService.java:1509)
>>>   at org.apache.zeppelin.interpreter.thrift.RemoteInterpreterService$Processor$getProgress.getResult(RemoteInterpreterService.java:1494)
>>>   at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
>>>   at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
>>>   at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:285)
>>>   at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>>>   at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>>>   at java.lang.Thread.run(Thread.java:745)
>>>
>>> *Zeppelin Log*
>>>
>>> ERROR [2016-11-16 14:42:05,664] ({Thread-377} JobProgressPoller.java[run]:54) - Can not get or update progress
>>> org.apache.zeppelin.interpreter.InterpreterException: org.apache.thrift.transport.TTransportException
>>>   at org.apache.zeppelin.interpreter.remote.RemoteInterpreter.getProgress(RemoteInterpreter.java:373)
>>>   at org.apache.zeppelin.interpreter.LazyOpenInterpreter.getProgress(LazyOpenInterpreter.java:111)
>>>   at org.apache.zeppelin.notebook.Paragraph.progress(Paragraph.java:237)
>>>   at org.apache.zeppelin.scheduler.JobProgressPoller.run(JobProgressPoller.java:51)
>>> Caused by: org.apache.thrift.transport.TTransportException
>>>   at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:132)
>>>   at org.apache.thrift.transport.TTransport.readAll(TTransport.java:86)
>>>   at org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:429)
>>>   at org.apache.thrift.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:318)
>>>   at org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:219)
>>>   at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:69)
>>>   at org.apache.zeppelin.interpreter.thrift.RemoteInterpreterService$Client.recv_getProgress(RemoteInterpreterService.java:296)
>>>   at org.apache.zeppelin.interpreter.thrift.RemoteInterpreterService$Client.getProgress(RemoteInt
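DuyHai's diagnosis above is a version mismatch: the pre-built Zeppelin targets the Spark 1.6.x that ships with Datastax 5.0.3, while the cluster runs Spark 2.0.2. A quick sanity check before binding Zeppelin to a cluster is to compare the major.minor components of the two Spark versions. A minimal shell sketch (the version strings are hard-coded for illustration; in a real setup the installed version would come from something like `$SPARK_HOME/bin/spark-submit --version`):

```shell
# Compare the major.minor of two Spark version strings and report a mismatch.
check_spark_match() {
  built_for="$1"; installed="$2"
  bf=$(echo "$built_for" | cut -d. -f1-2)    # e.g. "1.6.2" -> "1.6"
  inst=$(echo "$installed" | cut -d. -f1-2)  # e.g. "2.0.2" -> "2.0"
  if [ "$bf" = "$inst" ]; then
    echo "compatible"
  else
    echo "mismatch: Zeppelin built for Spark $bf, cluster runs $inst"
  fi
}

check_spark_match "1.6.2" "2.0.2"
# -> mismatch: Zeppelin built for Spark 1.6, cluster runs 2.0
```

A patch-level difference (e.g. 2.0.0 vs 2.0.2) is usually fine; it is the major.minor mismatch that breaks the interpreter, as in the thread above.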
Re: Setting default interpreter at notebook level
Hi Jongyoul,

Thanks for the information.

-AB

On Thu, Jul 28, 2016 at 8:43 PM, Jongyoul Lee <jongy...@gmail.com> wrote:
> Hi Abul,
>
> Concerning "defaultInterpreter": it is a feature of the current master and
> does not work in 0.6.0. Sorry for the wrong information. For now, we don't
> have any specific plan for supporting a different default interpreter with
> the same interpreter setting. Thus, in your case, the %r tag is the proper
> way for now. I also don't think it's the best approach; I hope Zeppelin
> supports this feature, too.
>
> Regards,
> Jongyoul
>
> On Thu, Jul 28, 2016 at 10:54 PM, Abul Basar <aba...@einext.com> wrote:
>> Hello Jongyoul,
>>
>> I could not find the file interpreter-setting.json, but I found a file
>> conf/interpreter.json. I added a property "default":"true" for the
>> interpreter "org.apache.zeppelin.spark.SparkRInterpreter" and restarted
>> the Zeppelin daemon service, but R did not work as the default interpreter.
>>
>> Then I changed conf/zeppelin-site.xml to alter the sequence of the
>> interpreters, putting org.apache.zeppelin.spark.SparkRInterpreter first.
>> This worked, but the solution is not practical: if I want to work on an
>> R notebook and a Scala notebook in parallel, this mechanism requires me
>> to touch conf/zeppelin-site.xml or keep using the %r tag in each cell.
>>
>> Thanks!
>>
>> -AB
>>
>> On Mon, Jul 25, 2016 at 3:30 PM, Jongyoul Lee <jongy...@gmail.com> wrote:
>>> Hello Abul,
>>>
>>> Changing orders within a group dynamically is not supported yet. You can
>>> change it by putting an interpreter-setting.json in a resources
>>> directory. In interpreter-setting.json you will find a property named
>>> `default`; if it is true, that interpreter becomes the default of its
>>> group. If you don't want to compile Zeppelin again, copy
>>> interpreter-setting.json into interpreter/spark/, open it, and change it
>>> there. That has the same effect.
>>>
>>> Hope this helps,
>>> Jongyoul
>>>
>>> On Mon, Jul 25, 2016 at 4:39 PM, Abul Basar <aba...@einext.com> wrote:
>>>> Hi Krishnaprasad,
>>>>
>>>> Yes, I have played around with that feature. What I found is that
>>>> "spark, pyspark, r, sql" are grouped together. I use Zeppelin for Spark
>>>> projects, so I need to set one of these sub-categories as the default.
>>>> Most often I use Scala for Spark, but I should be able to create a
>>>> notebook using R (which essentially is SparkR) as the default. Please
>>>> let me know if I am missing something.
>>>>
>>>> Thanks!
>>>> - AB
>>>>
>>>> On Mon, Jul 25, 2016 at 12:45 PM, Krishnaprasad A S <
>>>> krishna.pra...@flytxt.com> wrote:
>>>>> Hi Abul,
>>>>> You can change the default interpreter for each notebook through the
>>>>> Zeppelin web UI. Go to the notebook, then Settings (top right corner),
>>>>> where you will find the Interpreter binding option. You can reorder
>>>>> the interpreters by drag and drop; the first one becomes the default.
>>>>>
>>>>> Hope this helps.
>>>>>
>>>>> Regards,
>>>>> Krishnaprasad
>>>>>
>>>>> On Mon, Jul 25, 2016 at 12:01 PM, Abul Basar <aba...@einext.com> wrote:
>>>>>> I know there is a way to set up a default interpreter in Zeppelin
>>>>>> using the zeppelin.interpreters property in conf/zeppelin-site.xml.
>>>>>> That setting is global in nature.
>>>>>>
>>>>>> But is it possible to create a notebook-level setting for the
>>>>>> interpreter? For example, in one notebook I want to set the default
>>>>>> interpreter to R so that I do not have to start every code block with
>>>>>> "%spark.r", while in another notebook I want to set the default
>>>>>> interpreter to Scala.
>>>>>>
>>>>>> I am using v0.6
>>>>>>
>>>>>> AB
>>>>>
>>>>> --
>>>>> Krishnaprasad A S
>>>>> Lead Engineer
>>>>> Flytxt
>>>>> Skype: krishnaprasadas
>>>>> M: +91 8907209454 | O: +91 471.3082753 | F: +91 471.2700202
>>>>> www.flytxt.com | Visit our blog <http://blog.flytxt.com/> | Follow us
>>>>> <http://www.twitter.com/flytxt> | Connect on LinkedIn
>>>>> <http://www.linkedin.com/company/22166?goback=%2Efcs_GLHD_flytxt_false_*2_*2_*2_*2_*2_*2_*2_*2_*2_*2_*2_*2=ncsrch_hits>
>>>
>>> --
>>> 이종열, Jongyoul Lee, 李宗烈
>>> http://madeng.net
>
> --
> 이종열, Jongyoul Lee, 李宗烈
> http://madeng.net
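Jongyoul's suggestion above amounts to flipping the `default` flag on one interpreter entry inside interpreter-setting.json. A minimal Python sketch of that edit; note the file path and the entry structure here are stand-ins based on the thread (the real schema varies between Zeppelin versions), and the class names are the ones quoted above:

```python
import json

# Hypothetical location, following the thread's suggestion to copy
# interpreter-setting.json into interpreter/spark/ under the Zeppelin
# install directory.
SETTING_FILE = "interpreter/spark/interpreter-setting.json"

def set_default(settings, class_name):
    """Set `default: true` on the entry matching class_name, false elsewhere."""
    for entry in settings:
        entry["default"] = (entry.get("className") == class_name)
    return settings

# Minimal stand-in for the file's contents:
settings = json.loads("""[
  {"name": "spark", "className": "org.apache.zeppelin.spark.SparkInterpreter",  "default": true},
  {"name": "r",     "className": "org.apache.zeppelin.spark.SparkRInterpreter", "default": false}
]""")
set_default(settings, "org.apache.zeppelin.spark.SparkRInterpreter")
# In practice you would then write the result back:
#   with open(SETTING_FILE, "w") as f:
#       json.dump(settings, f, indent=2)
```

As Abul found, a restart of the Zeppelin daemon is needed for the change to take effect, and on 0.6.0 the flag may simply not be honored.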
Re: Does zeppelin-0.6 support notebooks created with zeppelin-0.5.6?
What I did: copied $zeppelin_home/notebook from the old version to the new one. All previous notebooks worked in the new version.

On Sunday, 24 July 2016, Egor Pahomov wrote:
> Hi, I'm trying to move from Zeppelin 0.5.6 to Zeppelin 0.6, and the new
> Zeppelin does not read my old notebooks. Am I doing something wrong, or is
> this by design?
>
> --
>
> Sincerely yours,
> Egor Pakhomov
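The migration step described above is just a recursive copy of the notebook store. A sketch run against throwaway directories so it is self-contained; in practice OLD_ZEPPELIN and NEW_ZEPPELIN would be the real install paths, and the notebook id is a made-up example:

```shell
# Stand-ins for the two install directories:
OLD_ZEPPELIN=$(mktemp -d)
NEW_ZEPPELIN=$(mktemp -d)

# Fake one existing notebook (Zeppelin stores each note as
# notebook/<note-id>/note.json; "2A94M5J1Z" is a hypothetical id).
mkdir -p "$OLD_ZEPPELIN/notebook/2A94M5J1Z"
echo '{}' > "$OLD_ZEPPELIN/notebook/2A94M5J1Z/note.json"

# The migration step: stop Zeppelin, then copy the whole notebook store.
cp -r "$OLD_ZEPPELIN/notebook" "$NEW_ZEPPELIN/"

ls "$NEW_ZEPPELIN/notebook"
```

Copying the directory rather than individual note.json files keeps the note ids intact, which is what lets the new version pick the notebooks up unchanged.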