Do you mean sparkSession.streams.awaitAnyTermination()? Could you share your
code? Or see the following:
my demo code:

val hourDevice = beginTimeDevice
  .groupBy($"subsId", $"eventBeginHour", $"serviceType")
  .agg("duration" -> "sum")
  .withColumnRenamed("sum(duration)", "durationForHo
If I use the foreach function, then I may need to write a custom Kafka stream
writer, right?
And I might not be able to use the default writeStream.format("kafka") method?
--
Sent from: http://apache-spark-user-list.1001560.n3.nabble.com/
You can use the *foreach* sink to achieve the logic you want.
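In practice, foreachBatch (available since Spark 2.4) is often simpler than a custom foreach writer for fanning out to two sinks: it hands you each micro-batch as an ordinary DataFrame, so you can reuse the built-in batch writers, including format("kafka"). A minimal sketch, where the topic name, bootstrap servers, JDBC URL, and table name are placeholder values chosen for illustration:

```scala
import org.apache.spark.sql.DataFrame

// Write each micro-batch of the same streaming DataFrame to two sinks.
val query = hourDevice
  .writeStream
  .foreachBatch { (batch: DataFrame, batchId: Long) =>
    // Cache the batch so it is not recomputed for the second sink.
    batch.persist()

    // Sink 1: Kafka, via the ordinary batch Kafka writer.
    // Kafka expects key/value columns, so serialize the row as JSON.
    batch
      .selectExpr("CAST(subsId AS STRING) AS key",
                  "to_json(struct(*)) AS value")
      .write
      .format("kafka")
      .option("kafka.bootstrap.servers", "localhost:9092")  // placeholder
      .option("topic", "hour-device")                       // placeholder
      .save()

    // Sink 2: e.g. a JDBC table (placeholder connection settings).
    batch.write
      .format("jdbc")
      .option("url", "jdbc:postgresql://localhost/mydb")
      .option("dbtable", "hour_device")
      .mode("append")
      .save()

    batch.unpersist()
  }
  .option("checkpointLocation", "/tmp/checkpoints/two-sinks")
  .start()
```

Note that foreachBatch gives at-least-once semantics per sink; if a batch fails after the Kafka write but before the JDBC write, the batch is replayed to both on restart.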
act_coder wrote on Wed, Nov 4, 2020, 9:56 PM:
> I have a scenario where I would like to save the same streaming dataframe
> to
> two different streaming sinks.
>
> I have created a streaming dataframe which I need to send to both Kafka
> topic and d