Hello,

I have a case where I am continuously receiving sensor data, which is
being stored into a Cassandra table (through Kafka). Every week or so, I
want to manually enter additional data into the system, and I want this to
trigger some calculations that merge the manually entered data with the
week's worth of streaming sensor data.

Is there a way to make dynamic Cassandra queries based on data coming into 
Spark?

Example: pressure readings are continuously being stored into Cassandra,
and at the end of the week I enter a week's worth of temperatures into the
system (one day/row at a time).
I want each of these rows to trigger a query to Cassandra for the pressures
of that specific day, and then to run some calculations on the result.
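
To make that concrete: with a static RDD of day keys I could get exactly
the kind of keyed lookup I'm after using the connector's
joinWithCassandraTable, roughly like this sketch (keyspace, table, and
column names are made up, and I'm assuming `day` is the partition key of
the pressures table):

import com.datastax.spark.connector._
import org.apache.spark.sql.SparkSession

// Hypothetical schema: a `pressures` table partitioned by `day`.
case class DayKey(day: String)

val spark = SparkSession.builder()
  .appName("manual-entry-trigger-sketch")
  .config("spark.cassandra.connection.host", "127.0.0.1")
  .getOrCreate()
val sc = spark.sparkContext

// The week's worth of manually entered days (in reality these arrive as rows).
val manualDays = sc.parallelize(Seq(DayKey("2019-01-07"), DayKey("2019-01-08")))

// joinWithCassandraTable issues one keyed query per partition key instead
// of scanning the whole table, so only the pressures for these days come back.
val pressuresForDays = manualDays.joinWithCassandraTable("sensor_ks", "pressures")

pressuresForDays.foreach { case (key, row) =>
  println(s"${key.day} -> pressure=${row.getDouble("pressure")}")
}

But that is a batch job; what I can't figure out is how to drive those
lookups from the stream of manual rows.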

I have been looking at using Structured Streaming with the
spark-cassandra-connector, but I can't find a way to use the data from an
incoming row to parameterize the query being made; it seems I have to
query for 'everything' and then filter in Spark.
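
One idea I had is to drop down to foreachBatch and do the keyed join per
micro-batch, along these lines (again a rough sketch with made-up names;
I'm using Kafka as a stand-in source for the manual rows, with the day as
the message key):

import com.datastax.spark.connector._
import org.apache.spark.sql.{DataFrame, SparkSession}

val spark = SparkSession.builder().getOrCreate()
import spark.implicits._

// Stand-in source for the manually entered temperature rows.
val manualTemps: DataFrame = spark.readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "localhost:9092")
  .option("subscribe", "manual_temps")
  .load()

val query = manualTemps.writeStream
  .foreachBatch { (batch: DataFrame, batchId: Long) =>
    // Each micro-batch is a plain DataFrame, so I can drop to the RDD API
    // and issue keyed Cassandra queries for just the days in this batch.
    val dayKeys = batch
      .selectExpr("CAST(key AS STRING) AS day") // assumes day is the Kafka key
      .as[String]
      .rdd
      .map(Tuple1(_))

    val pressures = dayKeys.joinWithCassandraTable("sensor_ks", "pressures")
    // ... merge with the temperatures from `batch` and run the calculations ...
    println(s"batch $batchId: fetched ${pressures.count()} pressure rows")
  }
  .start()

query.awaitTermination()

But this feels like stepping outside Structured Streaming proper, so I'm
not sure it's the intended way to do it.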

Any ideas or tips on how to solve this?
