Hi Tathagata,
Are there any limitations in the code below when writing to multiple files?
val inputdf: DataFrame = sparkSession.readStream
  .schema(schema)
  .format("csv")
  .option("delimiter", ",")
  .csv("src/main/streamingInput")
query1 =
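For reference, one common pattern is to start a separate query per sink from the same input DataFrame. The sketch below assumes a local SparkSession, a simple two-column schema, and placeholder output/checkpoint paths; none of these names come from the original message.

```scala
import org.apache.spark.sql.{DataFrame, SparkSession}
import org.apache.spark.sql.types.{IntegerType, StringType, StructType}

// Minimal sketch, assuming a local session; paths are placeholders.
val sparkSession = SparkSession.builder()
  .master("local[*]")
  .appName("multi-sink-sketch")
  .getOrCreate()

val schema = new StructType()
  .add("id", IntegerType)
  .add("name", StringType)

val inputdf: DataFrame = sparkSession.readStream
  .schema(schema)
  .option("delimiter", ",")
  .csv("src/main/streamingInput")

// Each sink needs its own query (and its own checkpoint location);
// both queries can read from the same input DataFrame.
val query1 = inputdf.writeStream
  .format("parquet")
  .option("path", "out/sink1")
  .option("checkpointLocation", "chk/sink1")
  .start()

val query2 = inputdf.writeStream
  .format("console")
  .start()

// Block until any of the started queries terminates.
sparkSession.streams.awaitAnyTermination()
```

Each `start()` returns an independent StreamingQuery, so both sinks make progress concurrently from the same source.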
Hi Chandan/Jürgen,
I tried this with native code, using a single input DataFrame with
multiple sinks, as follows:
Spark provides a method called awaitAnyTermination() in
StreamingQueryManager.scala, which provides the details required to
handle the queries processed by Spark. By observing
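As a rough illustration of the above, awaitAnyTermination() can be wrapped to surface a failed query; this is only a sketch, assuming `sparkSession` is an active SparkSession with streaming queries already started.

```scala
import org.apache.spark.sql.streaming.StreamingQueryException

// Sketch: block until any query stops; if one failed,
// awaitAnyTermination() rethrows its StreamingQueryException.
try {
  sparkSession.streams.awaitAnyTermination()
} catch {
  case e: StreamingQueryException =>
    println(s"A query failed: ${e.message}")
    // The manager still exposes the remaining active queries.
    sparkSession.streams.active.foreach { q =>
      println(s"still running: name=${q.name} id=${q.id}")
    }
}
```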
Hi Jürgen,
Have you found any solution or workaround for writing to multiple sinks from a
single source, given that we cannot process multiple sinks at a time?
I also have an ETL scenario where we use a clone component with
multiple sinks fed by a single input streaming DataFrame.
Could you keep us posted once you
Hi,
I am new to Spark SQL. I see the mapping below at line 125 of
JdbcUtils.scala:
*case StringType => Option(JdbcType("TEXT", java.sql.Types.CLOB))*
which says StringType maps to the JDBC database type "TEXT" with the JDBC
null type CLOB, which internally takes the value 2005
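The 2005 mentioned above is simply the value of the `java.sql.Types.CLOB` constant defined by the JDK, which is what JdbcUtils.scala passes along; a quick check:

```scala
// java.sql.Types defines the generic SQL type codes used by JDBC.
// CLOB is the constant 2005 referenced in the mapping above.
println(java.sql.Types.CLOB)    // 2005
println(java.sql.Types.VARCHAR) // 12, shown for comparison
```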