> of , the table would be and your
> query would become SELECT sum(value) FROM table GROUP BY key;
>
> Otherwise, you will need to get all that data into a single site to
> perform a final aggregation prior to writing to Cassandra.
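The final-aggregation step described above can be sketched in plain Java (outside Flink, with hypothetical device keys and values), merging per-site partial sums for the same key before a single write to Cassandra:

```java
import java.util.HashMap;
import java.util.Map;

public class FinalAggregation {

    // Merge per-site partial sums: the same key (device) may have
    // reported on both sites, so matching keys are summed.
    public static Map<String, Long> merge(Map<String, Long> siteA,
                                          Map<String, Long> siteB) {
        Map<String, Long> total = new HashMap<>(siteA);
        siteB.forEach((key, value) -> total.merge(key, value, Long::sum));
        return total;
    }

    public static void main(String[] args) {
        Map<String, Long> siteA = Map.of("deviceX", 10L);
        Map<String, Long> siteB = Map.of("deviceX", 5L);
        System.out.println(merge(siteA, siteB).get("deviceX")); // prints 15
    }
}
```

In a real Flink job this merge would happen in a single keyed window operator, so each key's final sum is produced at one site before the Cassandra sink.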
>
> On Wed, May 15,
Hello Flink Experts.
We have a Flink job consuming data from Kafka and ingesting it into a multi-site
(Azure-east – Azure-west) replicated Cassandra cluster.
Now we have to aggregate the data hourly. The problem is that device X can report
once on site A and once on site B. This means that some messages for that
ectors and libraries
in the proper scope.
If you use one of the newer Flink quickstart projects, this should
automatically happen.
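For reference, such a quickstart project can be generated with Flink's Maven archetype (the version shown here is only an example; substitute the release you actually run):

```shell
mvn archetype:generate \
  -DarchetypeGroupId=org.apache.flink \
  -DarchetypeArtifactId=flink-quickstart-java \
  -DarchetypeVersion=1.4.0
```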
Best,
Stephan
On Sun, Feb 18, 2018 at 3:38 PM, Melekh, Gregory
wrote:
> Hi all.
> I have a streaming job
Hi all.
I have a streaming job that reads from Kafka 0.10, manipulates the data, and writes it to
Cassandra (Tuple18).
This job also has a window and a CustomReducer class involved to reduce the data.
If the groupedBy_windowed_stream DataStream is defined with 9 fields (Tuple9),
compilation takes 5 seconds.
In current (Tuple