If your code doesn't require end-to-end exactly-once semantics, you can
leverage foreachBatch, which lets you reuse any batch sink from within a
streaming query.
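A minimal sketch of that approach, assuming the spark-bigquery connector as the batch sink (the broker address, topic, and table names here are hypothetical):

```scala
import org.apache.spark.sql.{DataFrame, SparkSession}

val spark = SparkSession.builder.appName("prices").getOrCreate()

val stream = spark.readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "broker:9092") // hypothetical broker
  .option("subscribe", "prices")                    // hypothetical topic
  .load()

stream.writeStream
  .foreachBatch { (batchDF: DataFrame, batchId: Long) =>
    // Inside this function batchDF is an ordinary batch DataFrame,
    // so any batch writer works. Note this alone gives at-least-once,
    // not exactly-once: a failed micro-batch may be replayed, so the
    // sink must deduplicate (e.g. keyed on batchId) for exactly-once.
    batchDF.write
      .format("bigquery")                 // assumes spark-bigquery connector on the classpath
      .option("table", "dataset.prices")  // hypothetical table
      .mode("append")
      .save()
  }
  .start()
```

foreachBatch is available from Spark 2.4 onward; the function receives the micro-batch DataFrame and a monotonically increasing batch id you can use for deduplication.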
If your code does require end-to-end exactly-once, then that's a
different story. I'm not familiar with BigQuery and have no idea how its
sink is implemented, but
with the old Spark Streaming API (example in Scala), this would have been
easier through RDDs. You could read the data:
val dstream = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](
  streamingContext, kafkaParams, topicsValue)

dstream.foreachRDD { pricesRDD =>
  // skip empty micro-batches before writing to the sink
  if (!pricesRDD.isEmpty()) {
    // process and write pricesRDD here
  }
}