Yes, it looks like the connector only creates the connection once when it starts and fails if the host is no longer reachable. It should be possible to catch that failure and try to re-open the connection. I opened a JIRA for this issue (FLINK-3857).
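To make the idea concrete, here is a minimal sketch of what "catch the failure, re-resolve DNS, and re-open the connection" could look like. This is not the actual connector API; the class and method names (ReconnectingWriter, writeWithRetry, the commented-out reconnect helper) are illustrative, and the real fix tracked in FLINK-3857 would live inside the Elasticsearch sink itself.

```java
import java.net.InetAddress;
import java.net.UnknownHostException;
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch only: the key points are (1) resolve the hostname
// freshly on each reconnect so a DNS change is picked up, and (2) retry
// the write after rebuilding the connection instead of failing hard.
public class ReconnectingWriter {
    private final String clusterHost;   // e.g. "elasticsearch-dev.foo.de"
    private final int maxRetries;

    public ReconnectingWriter(String clusterHost, int maxRetries) {
        this.clusterHost = clusterHost;
        this.maxRetries = maxRetries;
    }

    /** Resolve the hostname freshly instead of reusing a cached address. */
    public List<String> resolveAddresses() throws UnknownHostException {
        List<String> addrs = new ArrayList<>();
        for (InetAddress a : InetAddress.getAllByName(clusterHost)) {
            addrs.add(a.getHostAddress());
        }
        return addrs;
    }

    /** Retry a write; after each failure, rebuild the connection from fresh DNS. */
    public void writeWithRetry(Runnable write) {
        for (int attempt = 0; attempt <= maxRetries; attempt++) {
            try {
                write.run();
                return;
            } catch (RuntimeException e) {
                // Connection is presumably stale: drop it and reconnect
                // using the freshly resolved addresses, e.g.
                // reconnect(resolveAddresses());  // hypothetical helper
            }
        }
        throw new RuntimeException("write failed after " + maxRetries + " retries");
    }
}
```

Note that the JVM may also cache DNS lookups internally (the `networkaddress.cache.ttl` security property), so a real implementation would need to make sure the re-resolution actually hits DNS again.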
Would you like to implement the improvement?

2016-05-02 9:38 GMT+02:00 Sendoh <unicorn.bana...@gmail.com>:

> Hi,
>
> When using the Elasticsearch connector, is there a way to reflect an IP
> change of the Elasticsearch cluster?
> We use the DNS name of Elasticsearch in the data sink, e.g.
> elasticsearch-dev.foo.de. However, when we replace the old Elasticsearch
> cluster with a new one, the Elasticsearch connector cannot write into the
> new one due to the IP change.
>
> This is an important feature for us because it means we don't have to
> restart the Flink job. The reason might be that the Flink-Elasticsearch2
> connector looks up the IP from DNS only once.
> Thus, one option might be: when the response from writing into
> Elasticsearch is not successful, let the Flink environment create a new
> data sink?
>
> We use the Flink Elasticsearch-connector2 (for Elasticsearch 2.x) on AWS.
>
> Best,
>
> Sendoh
>
> --
> View this message in context:
> http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Any-way-for-Flink-Elasticsearch-connector-reflecting-IP-change-of-Elasticsearch-cluster-tp6597.html
> Sent from the Apache Flink User Mailing List archive at Nabble.com.