Hi Timo, Hi Gordon,

thanks for the reply! I checked the connection from both clusters to each
other, and I can telnet to port 9300 of Flink, so I don't think the
connection is the issue here.

In our live environment we are currently using a custom Elasticsearch
connector, which relied on some extra libraries deployed on the cluster. I
found one Lucene lib there and deleted it (since all dependencies should be
in the Flink job jar), but unfortunately that did not help either ...
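One thing I'm doing now to see which Lucene jar actually wins at runtime is
printing the code source of the loaded class from inside the job. Rough
sketch (ClasspathProbe is just a throwaway name; String.class stands in here
so it compiles without Lucene on the classpath; in the job I'd probe
org.apache.lucene.util.Version instead):

```java
// Sketch: print which code source (jar or directory) a class was actually
// loaded from, to spot classpath conflicts behind a NoSuchFieldError.
public class ClasspathProbe {
    static String describe(Class<?> c) {
        java.security.CodeSource cs = c.getProtectionDomain().getCodeSource();
        // Core JDK classes report a null code source; application classes
        // report the URL of the jar or directory they came from.
        return c.getName() + " loaded from: "
                + (cs == null ? "bootstrap classpath (JDK)" : cs.getLocation());
    }

    public static void main(String[] args) {
        // In the Flink job, replace String.class with
        // org.apache.lucene.util.Version.class to see which Lucene jar wins.
        System.out.println(describe(String.class));
    }
}
```

If the printed jar on the cluster differs from the one in the local run, that
would explain why it only fails there.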

Cheers
Fabian


--

*Fabian Wollert*
*Data Engineering*
*Technology*

E-Mail: fabian.woll...@zalando.de
Location: ZMAP <http://zmap.zalando.net/?q=fabian.woll...@zalando.de>

2017-07-13 13:46 GMT+02:00 Timo Walther <twal...@apache.org>:

> Hi Fabian,
>
> I'm looping in Gordon. Maybe he knows what's happening here.
>
> Regards,
> Timo
>
>
> On 13.07.17 at 13:26, Fabian Wollert wrote:
>
> Hi everyone,
>
> I'm trying to make use of the new Elasticsearch Connector
> <https://ci.apache.org/projects/flink/flink-docs-release-1.3/dev/connectors/elasticsearch.html>.
> I got a version running locally in my IDE (with SSH tunnels to my
> Elasticsearch cluster in AWS), and I see the data written to Elasticsearch
> perfectly, just as I want it. As soon as I try to run this on our dev
> cluster (Flink 1.3.0, running in the same VPC as the Elasticsearch
> cluster), though, I get the following error message (in the sink):
>
> java.lang.NoSuchFieldError: LUCENE_5_5_0
> at org.elasticsearch.Version.<clinit>(Version.java:295)
> at org.elasticsearch.client.transport.TransportClient$Builder.build(TransportClient.java:129)
> at org.apache.flink.streaming.connectors.elasticsearch2.Elasticsearch2ApiCallBridge.createClient(Elasticsearch2ApiCallBridge.java:65)
> at org.apache.flink.streaming.connectors.elasticsearch.ElasticsearchSinkBase.open(ElasticsearchSinkBase.java:272)
> at org.apache.flink.api.common.functions.util.FunctionUtils.openFunction(FunctionUtils.java:36)
> at org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.open(AbstractUdfStreamOperator.java:111)
> at org.apache.flink.streaming.runtime.tasks.StreamTask.openAllOperators(StreamTask.java:375)
> at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:252)
> at org.apache.flink.runtime.taskmanager.Task.run(Task.java:702)
> at java.lang.Thread.run(Thread.java:748)
>
> I first thought this had something to do with mismatched versions, but it
> happens to me with both Elasticsearch 2.2.2 (bundled with Lucene 5.4.1)
> and Elasticsearch 2.3 (bundled with Lucene 5.5.0).
>
> Can someone point me to the exact version conflict happening here (or tell
> me where to investigate further)? Currently my setup looks like everything
> is actually running with Lucene 5.5.0, so I'm wondering where exactly that
> error message is coming from, and also why it runs locally but not on the
> cluster. I'm still investigating whether this is a general connection issue
> from the Flink cluster to the ES cluster, but that would be surprising, and
> the error message would then be misleading ...
>
> Cheers
> Fabian
>
> --
> *Fabian Wollert*
> *Senior Data Engineer*
>
> *POSTAL ADDRESS*
> *Zalando SE*
> *11501 Berlin*
>
> *OFFICE*
> *Zalando SE*
> *Charlottenstraße 4*
> *10969 Berlin*
> *Germany*
>
> *Email: fabian.woll...@zalando.de <fabian.woll...@zalando.de>*
> *Web: corporate.zalando.com <http://corporate.zalando.com>*
> *Jobs: jobs.zalando.de <http://jobs.zalando.de>*
>
> *Zalando SE, Tamara-Danz-Straße 1, 10243 Berlin*
> *Company registration: Amtsgericht Charlottenburg, HRB 158855 B*
> *VAT registration number: DE 260543043*
> *Management Board: Robert Gentz, David Schneider, Rubin Ritter*
> *Chairperson of the Supervisory Board: Lothar Lanz*
> *Registered office: Berlin*
>
>
>
