Thanks, Daniel.  I guess you were suggesting using DStreams/RDDs.  Would it be possible to use Structured Streaming/DataFrames for multi-source streaming?  In addition, we really need each stream's data ingestion to be asynchronous or non-blocking...  Thanks!

On 8/24/21 9:27 PM, daniel williams wrote:
Yeah. Build up the streams as a collection, map each query to a start() invocation, and then map those results to awaitTermination() or whatever other blocking mechanism you’d like to use.
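
Roughly along these lines (an untested sketch -- the Kafka topics, sink format, and paths are just placeholders for whatever sources and sinks you actually use):

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.streaming.StreamingQuery

val spark = SparkSession.builder
  .appName("multi-stream-example")
  .getOrCreate()

// Hypothetical topic names; any Structured Streaming source works here.
val topics = Seq("events-a", "events-b", "events-c")

// Build the streams as a collection and map each one to start().
val queries: Seq[StreamingQuery] = topics.map { topic =>
  spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")
    .option("subscribe", topic)
    .load()
    .writeStream
    .format("parquet")
    .option("path", s"/data/out/$topic")
    .option("checkpointLocation", s"/data/chk/$topic")
    .start()  // non-blocking: the query runs asynchronously from here on
}

// Block the driver until every query terminates; alternatively,
// spark.streams.awaitAnyTermination() stops at the first failure.
queries.foreach(_.awaitTermination())

Note that start() returns immediately and each query runs asynchronously on the same SparkSession, so the driver only blocks at the awaitTermination() calls at the end.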

On Tue, Aug 24, 2021 at 4:37 PM Artemis User <arte...@dtechspace.com> wrote:

    Is there a way to run multiple streams in a single Spark job using
    Structured Streaming?  If not, is there an easy way to perform
    inter-job communications (e.g. referencing a DataFrame among
    concurrent jobs) in Spark?  Thanks a lot in advance!

    -- ND

--
-dan
