execute SQL queries on the temporary view
> result_df = (spark.sql("""
>     SELECT
>         window.start, window.end, provinceId, sum(payAmount) AS totalPayAmount
>     FROM michboy
>     GROUP BY provinceId, window(createTime, '1 hour', '30 minutes')
>     ORDER BY window.start
> """))
Hello!
I am attempting to write a streaming pipeline that consumes data from a
Kafka source, manipulates the data, and then writes the results to a downstream
sink (Kafka, Redis, etc.). I want to write fully formed SQL instead of using the
function API that Spark offers. I read a few guides on