Now my Spark job can perform SQL operations against a database table. Next I 
want to combine that with a streaming context, so I switched to the 
readStream() function. But after job submission, Spark throws:

    Exception in thread "main" java.lang.UnsupportedOperationException: Data 
source jdbc does not support streamed reading

It looks like sparkSession.readStream.format("jdbc")... fails because the jdbc 
source doesn't support streaming:

    val sparkSession = SparkSession.builder().appName("my-test").getOrCreate()
    import sparkSession.implicits._
    val df = sparkSession.readStream.format("jdbc")...load()
    // other operations against df
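Since the jdbc source is batch-only, one common workaround is to load the 
database table with the batch read() API and join it against a genuinely 
streaming source (Structured Streaming supports stream-static joins, where 
the static side is re-evaluated per micro-batch). Below is a minimal sketch 
under that assumption; the JDBC URL, table name, credentials, the socket 
source, and the join column "key" are all placeholders, not values from my 
actual job:

    import org.apache.spark.sql.SparkSession

    object StreamStaticJdbcSketch {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder().appName("my-test").getOrCreate()

        // Batch read of the JDBC table: jdbc supports read(), not readStream().
        // All connection options here are placeholder values.
        val jdbcDf = spark.read
          .format("jdbc")
          .option("url", "jdbc:postgresql://dbhost:5432/mydb")
          .option("dbtable", "my_table")
          .option("user", "user")
          .option("password", "password")
          .load()

        // A source that actually supports streaming (socket, for illustration).
        val streamDf = spark.readStream
          .format("socket")
          .option("host", "localhost")
          .option("port", 9999)
          .load()

        // Stream-static join: each micro-batch of streamDf is joined
        // against the static JDBC DataFrame.
        val joined = streamDf.join(jdbcDf, streamDf("value") === jdbcDf("key"))

        joined.writeStream
          .format("console")
          .start()
          .awaitTermination()
      }
    }

Note the static side is not a change-data stream: to pick up ongoing table 
changes you would need something like CDC (e.g. Debezium into Kafka) rather 
than the jdbc source.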

I checked this example:
https://github.com/apache/spark/blob/master/examples/src/main/scala/org/apache/spark/examples/sql/streaming/StructuredSessionization.scala

Searching the internet, I also don't see any examples close to what I need. 
Any pointers, docs, or code snippets that illustrate this?

Thanks
