Re: Zeppelin 0.8.2 New Spark Interpreter
Are there clear instructions on how to use the spark.jars.packages properties? For instance, if I want to depend on the bintray repo https://dl.bintray.com/comp-bio-aging/main with "group.research.aging:spark-extensions_2.11:0.0.7.2" as a dependency, what should I do with the new interpreter?

On 2019/10/12 01:18:09, Jeff Zhang wrote:
> Glad to hear that.
>
> On Sat, Oct 12, 2019 at 1:30 AM, Mark Bidewell wrote:
>
> > Just wanted to say "thanks"! Using spark.jars.packages, etc. worked great!
> >
> > On Fri, Oct 11, 2019 at 9:45 AM Jeff Zhang wrote:
> >
> >> That's right, the document should also be updated.
> >>
> >> On Fri, Oct 11, 2019 at 9:28 PM, Mark Bidewell wrote:
> >>
> >>> Also, the interpreter setting UI is still listed as the first way to handle dependencies in the documentation - maybe it should be marked as deprecated?
> >>>
> >>> http://zeppelin.apache.org/docs/0.8.2/interpreter/spark.html
> >>>
> >>> On Thu, Oct 10, 2019 at 9:58 PM Jeff Zhang wrote:
> >>>
> It looks like many users are still used to specifying spark dependencies in the interpreter setting UI; spark.jars and spark.jars.packages seem too difficult to understand and not transparent, so I created ticket https://issues.apache.org/jira/browse/ZEPPELIN-4374 so that users can still set dependencies in the interpreter setting UI.
>
> On Fri, Oct 11, 2019 at 9:54 AM, Jeff Zhang wrote:
>
> > Like I said above, try to set them via spark.jars and spark.jars.packages.
> >
> > Don't set them here
> >
> > [image: image.png]
> >
> > On Fri, Oct 11, 2019 at 9:35 AM, Mark Bidewell wrote:
> >
> >> I was specifying them in the interpreter settings in the UI.
> >>
> >> On Thu, Oct 10, 2019 at 9:30 PM Jeff Zhang wrote:
> >>
> >>> How do you specify your spark interpreter dependencies? You need to specify them via the property spark.jars or spark.jars.packages for non-local mode.
> >>>
> >>> On Fri, Oct 11, 2019 at 3:45 AM, Mark Bidewell wrote:
> >>>
> I am running some initial tests of Zeppelin 0.8.2 and I am seeing some weird issues with dependencies. When I use the old interpreter, everything works as expected. When I use the new interpreter, classes in my interpreter dependencies cannot be resolved when connecting to a master that is not local[*]. I did not encounter issues with either interpreter on 0.8.1.
>
> Has anyone else seen this?
>
> Thanks!
>
> --
> Mark Bidewell
> http://www.linkedin.com/in/markbidewell
>
> --
> Best Regards
>
> Jeff Zhang
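For the Bintray question above, a minimal sketch of the two interpreter properties involved. The property names are standard Spark configuration keys (spark.jars.repositories requires Spark 2.2+), but the exact values shown are taken from the question and have not been verified against that repository:

```properties
# Set in the Spark interpreter settings (properties table), not the
# dependency/artifact section of the UI:
spark.jars.packages       group.research.aging:spark-extensions_2.11:0.0.7.2
spark.jars.repositories   https://dl.bintray.com/comp-bio-aging/main
```

With these set, the new interpreter should resolve the artifact at startup and ship it to the executors, which is what the interpreter-setting-UI dependencies fail to do for non-local masters.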
Re: accessing sql cell from another cell
Indeed, that is what I want, thank you!

On Fri, Aug 17, 2018, 03:42 Jeff Zhang wrote:
>
> Maybe ticket https://issues.apache.org/jira/browse/ZEPPELIN-3617 is what you need.
>
> This ticket will save the result of a paragraph to a Zeppelin resource, so that you can use z.get to get this data.
>
> On Fri, Aug 17, 2018 at 1:55 AM, Alex wrote:
>
>> Hi Anton
>>
>> Just for clarification:
>> The result set of a %sql cell (which is automatically displayed as a table or chart) shall be used in a Scala or pyspark cell, correct?
>>
>> Any reason you're not using the spark cell right away and displaying the result separately?
>>
>> Best regards
>> Alex
>>
>> Sent from my iPhone
>>
>> > On 16.08.2018 at 19:46, antonkul...@gmail.com wrote:
>> >
>> > Hello everybody,
>> >
>> > I find it very convenient to pass values between interpreters with the Zeppelin context; however, I do not understand how I should pass the results of SQL queries in a %sql cell to my Spark Scala cell. Could you please explain how to do it? In the docs there are only examples of passing between Scala and Python, and no examples of passing between %sql and Spark.
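The alternative Alex suggests, and the z.get route once ZEPPELIN-3617 is in place, might look like this in a note. z.show, z.get, and spark.sql are real ZeppelinContext/Spark calls, but the table name and the resource-pool key below are hypothetical; check the ticket for the actual key convention:

```scala
// Option 1 (works today): run the query from the Spark cell itself, so the
// DataFrame and the rendered table come from the same paragraph.
val df = spark.sql("SELECT name, age FROM users WHERE age > 30")
z.show(df)              // renders the same table/chart UI as a %sql cell
val rows = df.collect() // and the data stays available to Scala code

// Option 2 (after ZEPPELIN-3617): the %sql paragraph saves its result to
// the Zeppelin resource pool, and a later Scala paragraph reads it back.
// The key "saved_sql_result" is an assumption for illustration only.
val sqlResult = z.get("saved_sql_result")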
Re: Cannot make alluxio work
I checked the alluxio-master logs; they say:

"
bigdata_alluxio-master.1.jasmo3qhjjs5@web 2018-08-16 18:18:41,298 ERROR TThreadPoolServer - Thrift error occurred during processing of message.
bigdata_alluxio-master.1.jasmo3qhjjs5@web org.apache.thrift.protocol.TProtocolException: Missing version in readMessageBegin, old client?
bigdata_alluxio-master.1.jasmo3qhjjs5@web     at org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:228)
bigdata_alluxio-master.1.jasmo3qhjjs5@web     at org.apache.thrift.TMultiplexedProcessor.process(TMultiplexedProcessor.java:92)
bigdata_alluxio-master.1.jasmo3qhjjs5@web     at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286)
bigdata_alluxio-master.1.jasmo3qhjjs5@web     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
bigdata_alluxio-master.1.jasmo3qhjjs5@web     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
bigdata_alluxio-master.1.jasmo3qhjjs5@web     at java.lang.Thread.run(Thread.java:748)
"

However, I added alluxio-client to the interpreter dependencies; is it not supposed to work with just the client?

Sincerely,
Anton Kulaga
Bioinformatician at Computational Biology of Aging Group
296 Splaiul Independentei, Bucharest, Romania, 060031
http://aging-research.group

On Thu, 16 Aug 2018 at 20:48, antonkul...@gmail.com wrote:

> I am using the latest Zeppelin and the latest Alluxio. I successfully configured Alluxio in my docker stack (https://github.com/antonkulaga/bigdata-docker/blob/master/bigdata-extended.yml), however I fail to access it with the Zeppelin alluxio interpreter.
> I constantly get:
>
> ```
> java.lang.IllegalAccessError: tried to access method alluxio.Configuration.<init>()V from class org.apache.zeppelin.alluxio.AlluxioInterpreter
>     at org.apache.zeppelin.alluxio.AlluxioInterpreter.open(AlluxioInterpreter.java:78)
>     at org.apache.zeppelin.interpreter.LazyOpenInterpreter.open(LazyOpenInterpreter.java:69)
>     at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:617)
>     at org.apache.zeppelin.scheduler.Job.run(Job.java:188)
>     at org.apache.zeppelin.scheduler.FIFOScheduler$1.run(FIFOScheduler.java:140)
>     at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>     at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>     at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>     at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>     at java.lang.Thread.run(Thread.java:748)
> ```
>
> even though I configured the master URL and added alluxio-1.8.0-client.jar to the interpreter dependencies.
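Both the IllegalAccessError and the Thrift "Missing version in readMessageBegin, old client?" message are consistent with a version mismatch between the Alluxio client classes the Zeppelin interpreter was compiled against and the Alluxio 1.8.0 server. A few diagnostic checks, sketched under the assumption of a standard Alluxio and Zeppelin directory layout:

```shell
# On the Alluxio master: confirm the server version.
./bin/alluxio version

# On the Zeppelin host: list the Alluxio jars the interpreter actually
# loads -- the bundled interpreter jar may pin an older client API, which
# would shadow the alluxio-1.8.0-client.jar added via the UI.
ls "$ZEPPELIN_HOME"/interpreter/alluxio/
ls "$ZEPPELIN_HOME"/local-repo/   # UI-added dependencies are cached here
```

If the bundled jar targets an older Alluxio release, adding a newer client to the dependency list will not help; the interpreter itself would need to be rebuilt against the matching client version.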