I am very new to both Spark and Spark Structured Streaming. I need to
write an application that receives very large CSV files in an HDFS
folder. For each row in a file, the app must read some rows from a
Cassandra database (not many rows will be returned per CSV row), perform
some simple calculations, update the rows it read with the results, and
save the updated rows back to Cassandra.
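To make the scenario concrete, here is a rough sketch of how such a job could look with Structured Streaming's file source plus foreachBatch (available in PySpark from 2.4). All names here are placeholders I made up: the keyspace/table (ks.my_table), the HDFS paths, the join key "id", and the "simple calculation" itself; it also assumes the spark-cassandra-connector package is on the classpath (e.g. via --packages). This is only a sketch of the shape of the job, not a tested implementation.

```python
def compute_update(db_value, csv_value):
    """Placeholder for the 'simple calculation' applied per row."""
    return db_value + csv_value


def process_batch(batch_df, batch_id):
    # PySpark imports kept local so the file can be read without Spark installed.
    from pyspark.sql import functions as F

    spark = batch_df.sql_ctx.sparkSession

    # Read the Cassandra table through the connector's DataFrame source and
    # join it with this micro-batch. (From Python there is no direct
    # joinWithCassandraTable; a DataFrame join is the usual fallback.)
    cass = (spark.read
            .format("org.apache.spark.sql.cassandra")
            .options(keyspace="ks", table="my_table")
            .load())

    updated = (batch_df.join(cass, "id")
               .withColumn("value", F.col("db_value") + F.col("csv_value"))
               .select("id", "value"))

    # Write the updated rows back; for Cassandra, "append" upserts on the
    # primary key.
    (updated.write
     .format("org.apache.spark.sql.cassandra")
     .options(keyspace="ks", table="my_table")
     .mode("append")
     .save())


def start_stream(spark):
    # The file source watches an HDFS directory for new CSV files; the schema
    # must be declared up front for streaming reads.
    stream = (spark.readStream
              .schema("id INT, csv_value DOUBLE")
              .csv("hdfs:///data/incoming"))

    return (stream.writeStream
            .foreachBatch(process_batch)
            .option("checkpointLocation", "hdfs:///chk/csv-job")
            .start())
```

The start_stream function would be called from a driver script submitted with spark-submit; each newly arrived CSV file then becomes one (or more) micro-batches handed to process_batch.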

I have Spark version 2.4 and must use Python.

Is this a suitable scenario for Spark Structured Streaming?

Thanks



--
Sent from: http://apache-spark-user-list.1001560.n3.nabble.com/
