[ANNOUNCE] Apache Zeppelin 0.10.0 is released, Spark on Zeppelin Improved

2021-08-26 Thread Jeff Zhang
Hi Spark users,

We (Zeppelin community) are very excited to announce that Apache Zeppelin
notebook 0.10.0 is officially released. Here are the main features of Spark
on Zeppelin:

   - Support multiple versions of Spark - You can run different versions of
   Spark in one Zeppelin instance
   - Support multiple versions of Scala - You can run Spark builds for
   different Scala versions (2.10/2.11/2.12) in one Zeppelin instance
   - Support multiple languages - Scala, SQL, Python, and R are supported;
   beyond that, you can collaborate across languages, e.g. write a Scala
   UDF and use it in PySpark (see the sketch after this list)
   - Support multiple execution modes - Local | Standalone | Yarn | K8s
   - Interactive development - An interactive development experience that
   increases your productivity
   - Inline Visualization - You can visualize Spark Datasets/DataFrames via
   Python/R plotting libraries, and you can even build a SparkR Shiny app
   in Zeppelin
   - Multi-tenancy - Multiple users can work in one Zeppelin instance
   without affecting each other.
   - REST API Support - You can submit Spark jobs not only via the Zeppelin
   notebook UI but also via its REST API (so you can use Zeppelin as a
   Spark job server).
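
To make the cross-language item concrete, here is a minimal sketch of two
paragraphs in one Zeppelin note, assuming the note is bound to the Spark
interpreter group and its shared SparkSession (the UDF name plusOne is made
up for illustration):

    %spark
    // Scala paragraph: register a UDF on the shared SparkSession
    spark.udf.register("plusOne", (x: Int) => x + 1)

    %spark.pyspark
    # PySpark paragraph in the same note: the Scala-registered UDF is
    # visible because both paragraphs share one SparkSession
    spark.sql("SELECT plusOne(41) AS answer").show()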

The easiest way to try Zeppelin is via its Docker container; check out this
link for how to use Spark in the Zeppelin Docker container:
https://zeppelin.apache.org/docs/0.10.0/interpreter/spark.html#play-spark-in-zeppelin-docker
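
For example, a minimal way to bring it up locally (the host Spark path is a
placeholder; see the linked doc for the full set of options):

    docker run -u $(id -u) -p 8080:8080 --rm \
      -v /path/to/spark:/opt/spark -e SPARK_HOME=/opt/spark \
      --name zeppelin apache/zeppelin:0.10.0

Then open http://localhost:8080 in your browser.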

Spark on Zeppelin doc:
https://zeppelin.apache.org/docs/0.10.0/interpreter/spark.html
Download it here: https://zeppelin.apache.org/download.html

-- 
Best Regards

Jeff Zhang
Twitter: zjffdu


Re: Processing Multiple Streams in a Single Job

2021-08-26 Thread Mich Talebzadeh
Hi ND,

Within the same Spark job you can handle two topics simultaneously with
Spark Structured Streaming (SSS). Is that what you are implying?
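
As a minimal sketch (assuming the spark-sql-kafka connector is on the
classpath; broker address, topic names, and checkpoint paths below are
placeholders), one application can start two streaming queries and let them
run concurrently:

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder()
      .appName("TwoStreamsOneJob")
      .getOrCreate()

    // Helper to read one Kafka topic as a streaming DataFrame.
    def readTopic(topic: String) =
      spark.readStream
        .format("kafka")
        .option("kafka.bootstrap.servers", "localhost:9092")
        .option("subscribe", topic)
        .load()

    // Each start() launches an independent query inside the same job.
    readTopic("topicA").writeStream
      .format("console")
      .option("checkpointLocation", "/tmp/chk/topicA")
      .start()

    readTopic("topicB").writeStream
      .format("console")
      .option("checkpointLocation", "/tmp/chk/topicB")
      .start()

    // Block the driver until any active query terminates.
    spark.streams.awaitAnyTermination()

Because both queries share one SparkSession, a DataFrame that one part of
the job registers as a temp view is visible to the other, which also covers
the in-job dataframe sharing you asked about.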

HTH




*Disclaimer:* Use it at your own risk. Any and all responsibility for any
loss, damage or destruction of data or any other property which may arise
from relying on this email's technical content is explicitly disclaimed.
The author will in no case be liable for any monetary damages arising from
such loss, damage or destruction.




On Tue, 24 Aug 2021 at 23:37, Artemis User  wrote:

> Is there a way to run multiple streams in a single Spark job using
> Structured Streaming?  If not, is there an easy way to perform inter-job
> communications (e.g. referencing a dataframe among concurrent jobs) in
> Spark?  Thanks a lot in advance!
>
> -- ND