Hi

I have a live RDBMS and I'd like to run a Spark job on several
tables whenever new data comes in.

I could run batch Spark jobs through cron every minute, but each
job takes time and resources just to start up (SparkContext, YARN, ...).

I wonder if I could instead run a single long-lived Spark streaming job
to save those resources. However, I haven't seen anything about a JDBC
source for Structured Streaming in the documentation.
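
Something like the following is roughly what I have in mind — a sketch
of one long-running application that polls the table instead of a true
streaming source (the connection details, table, and id column are just
placeholders; it assumes new rows have a monotonically increasing id):

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.max

object JdbcPollingJob {
  def main(args: Array[String]): Unit = {
    // One long-lived SparkSession: the SparkContext/YARN startup cost is paid once.
    val spark = SparkSession.builder()
      .appName("jdbc-polling")
      .getOrCreate()

    // High-water mark; assumes a monotonically increasing id column.
    var lastSeenId = 0L

    while (true) {
      // Read only the rows that arrived since the last pass.
      val newRows = spark.read
        .format("jdbc")
        .option("url", "jdbc:postgresql://db-host/mydb") // placeholder connection
        .option("dbtable", s"(SELECT * FROM my_table WHERE id > $lastSeenId) AS t")
        .option("user", "spark")
        .option("password", "secret")
        .load()

      if (!newRows.isEmpty) {
        // ... run the existing batch logic on newRows here ...
        lastSeenId = newRows.agg(max("id")).first().getLong(0)
      }

      Thread.sleep(60 * 1000) // poll about once a minute, like the cron schedule
    }
  }
}
```

Is that a reasonable pattern, or is there a better-supported way to do it?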

Any recommendations?


-- 
nicolas

---------------------------------------------------------------------
To unsubscribe e-mail: user-unsubscr...@spark.apache.org
