Actually, we do not support a JDBC sink yet. The blog post was just an
example :) I agree it is misleading in hindsight.
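In the meantime, a ForeachWriter is the usual workaround. Here is a rough sketch (untested, Spark 2.x API) of writing each aggregated row to MySQL; the table name, column names, and connection details below are made up for illustration. Note the table name goes into the INSERT statement, not the connection string:

```scala
import java.sql.{Connection, DriverManager, PreparedStatement}
import org.apache.spark.sql.{ForeachWriter, Row}

// Hypothetical writer: one JDBC connection per partition/epoch.
class JdbcSinkWriter(url: String, user: String, pass: String)
    extends ForeachWriter[Row] {

  var conn: Connection = _
  var stmt: PreparedStatement = _

  override def open(partitionId: Long, version: Long): Boolean = {
    conn = DriverManager.getConnection(url, user, pass)
    // Table name ("events_hourly") is illustrative; it lives in the SQL,
    // not in the JDBC URL.
    stmt = conn.prepareStatement(
      "INSERT INTO events_hourly (action, window_start, cnt) VALUES (?, ?, ?)")
    true
  }

  override def process(row: Row): Unit = {
    stmt.setString(1, row.getString(0))
    stmt.setTimestamp(2, row.getStruct(1).getTimestamp(0))
    stmt.setLong(3, row.getLong(2))
    stmt.executeUpdate()
  }

  override def close(errorOrNull: Throwable): Unit = {
    if (stmt != null) stmt.close()
    if (conn != null) conn.close()
  }
}
```

You would then wire it in with something like
`.writeStream.foreach(new JdbcSinkWriter("jdbc:mysql://host:3306/mydb", "user", "pass")).start()`
instead of `.format("jdbc")`. Be aware this gives at-least-once delivery, so you may want an idempotent upsert rather than a plain INSERT.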

On Wed, Jun 20, 2018 at 6:09 PM, kant kodali <kanth...@gmail.com> wrote:

> Hi All,
>
> Does Spark Structured Streaming have a JDBC sink, or do I need to use
> ForeachWriter? I see the following code at this link
> <https://databricks.com/blog/2016/07/28/structured-streaming-in-apache-spark.html>,
> and I can see that the database name can be passed in the connection
> string; however, I wonder how to pass a table name?
>
> inputDF.groupBy($"action", window($"time", "1 hour")).count()
>        .writeStream.format("jdbc")
>        .save("jdbc:mysql://…")
>
>
> Thanks,
> Kant
>
