When I try to build and run the streaming WordCount example (the one in
the Flink GitHub repository), I get the following error:
StreamingWordCount.java:[56,59] incompatible types:
org.apache.flink.api.java.operators.DataSource cannot be converted to
org.apache.flink.streaming.api.datastream.DataStream
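In case it helps, this error usually means the batch ExecutionEnvironment was used to read the input: its readTextFile() returns a DataSource, while the streaming WordCount expects a DataStream from a StreamExecutionEnvironment. A minimal sketch, assuming that is the cause (the input path is illustrative):

import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class StreamingWordCount {
    public static void main(String[] args) throws Exception {
        // Use the *streaming* environment, not ExecutionEnvironment (batch).
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        // On the streaming environment this yields a DataStream<String>,
        // not the batch DataSource reported in the compile error.
        DataStream<String> text = env.readTextFile("/path/to/input.txt");

        // ... the example's tokenize / keyBy / sum transformations go here ...
        text.print();

        env.execute("Streaming WordCount");
    }
}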
> If you want to have more Elasticsearch sink instances for a specific id,
> what you can do is split the stream, splitting out ids that you know to
> have higher throughput, and pipeline that split stream to an Elasticsearch
> Sink with higher parallelism.
>
> Gordo
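A rough sketch of the quoted suggestion, under a few assumptions: messages arrive as (id, payload) tuples (built with fromElements here in place of the Kafka source), HOT_IDS is a hypothetical set of ids known to have high throughput, and print() stands in for the Elasticsearch sink, whose construction depends on the connector version:

import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class SplitHotIds {

    // Hypothetical: ids known (or measured) to carry high throughput.
    static final Set<Integer> HOT_IDS = new HashSet<>(Arrays.asList(1, 7, 42));

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        // Stand-in for the Kafka source; a Kafka consumer would be used
        // in the real pipeline.
        DataStream<Tuple2<Integer, String>> messages =
                env.fromElements(Tuple2.of(1, "payload-a"), Tuple2.of(55, "payload-b"));

        // Split by filtering on the id: hot ids in one branch, the rest in another.
        DataStream<Tuple2<Integer, String>> hot =
                messages.filter(t -> HOT_IDS.contains(t.f0));
        DataStream<Tuple2<Integer, String>> rest =
                messages.filter(t -> !HOT_IDS.contains(t.f0));

        // The high-throughput branch gets a sink with higher parallelism.
        hot.print().setParallelism(8);   // would be an ElasticsearchSink in practice
        rest.print().setParallelism(2);  // would be an ElasticsearchSink in practice

        env.execute("Split hot ids to a higher-parallelism sink");
    }
}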
Can someone help me figure out how to implement this in Flink?
I have to create a pipeline Kafka -> Flink -> Elasticsearch. I have high-throughput
data coming into Kafka. All messages in Kafka have a key called 'id', and the value
is an integer that ranges from 1 to N. N is dynamic, with a maximum value of 100.
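For the basic pipeline shape, a minimal sketch assuming the universal Kafka connector (FlinkKafkaConsumer) and plain string messages; the broker address, topic name, and parseId() helper are illustrative, and print() again stands in for the Elasticsearch sink:

import java.util.Properties;

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.api.java.functions.KeySelector;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;

public class KafkaToElasticsearchPipeline {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092"); // illustrative
        props.setProperty("group.id", "flink-es-pipeline");       // illustrative

        // Consume the topic as raw strings; the topic name is an assumption.
        DataStream<String> raw = env.addSource(
                new FlinkKafkaConsumer<>("messages", new SimpleStringSchema(), props));

        // Key by 'id' so all records with the same id are handled by the same
        // parallel subtask before they reach the sink.
        raw.keyBy(new KeySelector<String, Integer>() {
                @Override
                public Integer getKey(String message) {
                    return parseId(message);
                }
            })
            .print(); // print() stands in for the Elasticsearch sink here

        env.execute("Kafka -> Flink -> Elasticsearch");
    }

    // Hypothetical helper: extract the integer 'id' (1..100) from the message;
    // real messages would need proper field/JSON parsing.
    private static Integer parseId(String message) {
        return Math.floorMod(message.hashCode(), 100) + 1; // placeholder only
    }
}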
I am coming from the Apache Storm world and am planning to switch from Storm to
Flink. I have been reading the Flink documentation, but I couldn't find some of
the capabilities I need in Flink that were present in Storm.
I need to have a streaming pipeline Kafka -> Flink -> ElasticSearch. In Storm, I
have seen that I