[jira] [Commented] (FLINK-34663) flink-opensearch connector Unable to parse response body for Response

2024-03-21 Thread Andriy Redko (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34663?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17829555#comment-17829555
 ] 

Andriy Redko commented on FLINK-34663:
--

[~wgendy], apologies for the delay. It seems the only path forward is to add 
dedicated support for OS v1 and OS v2 (as is done for Elasticsearch); that 
should be fixed by FLINK-33859, which we expect to merge soon. Thank you.

> flink-opensearch connector Unable to parse response body for Response
> -
>
> Key: FLINK-34663
> URL: https://issues.apache.org/jira/browse/FLINK-34663
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Opensearch
>Affects Versions: 1.18.1
> Environment: Docker-Compose:
> Flink 1.18.1 - Java11
> OpenSearch 2.12.0
> Flink-Sql-Opensearch-connector (flink 1.18.1 → Os 1.3)
>Reporter: wael shehata
>Priority: Major
> Attachments: image-2024-03-14-00-10-40-982.png
>
>
> I'm trying to use the flink-sql-opensearch connector to sink stream data to 
> OpenSearch via Flink.
> After submitting the job to the Flink cluster successfully, the job runs 
> normally for about 30 seconds and creates the index with data, then it fails 
> with the following message:
> _*org.apache.flink.util.FlinkRuntimeException: Complete bulk has failed… 
> Caused by: java.io.IOException: Unable to parse response body for Response*_
> _*{requestLine=POST /_bulk?timeout=1m HTTP/1.1, 
> host=[http://172.20.0.6:9200|http://172.20.0.6:9200/], response=HTTP/1.1 200 
> OK}*_
> at 
> org.opensearch.client.RestHighLevelClient$1.onSuccess(RestHighLevelClient.java:1942)
> at 
> org.opensearch.client.RestClient$FailureTrackingResponseListener.onSuccess(RestClient.java:662)
> at org.opensearch.client.RestClient$1.completed(RestClient.java:396)
> at org.opensearch.client.RestClient$1.completed(RestClient.java:390)
> at org.apache.http.concurrent.BasicFuture.completed(BasicFuture.java:122)
> at 
> org.apache.http.impl.nio.client.DefaultClientExchangeHandlerImpl.responseCompleted(DefaultClientExchangeHandlerImpl.java:182)
> at 
> org.apache.http.nio.protocol.HttpAsyncRequestExecutor.processResponse(HttpAsyncRequestExecutor.java:448)
> at 
> org.apache.http.nio.protocol.HttpAsyncRequestExecutor.inputReady(HttpAsyncRequestExecutor.java:338)
> at 
> org.apache.http.impl.nio.DefaultNHttpClientConnection.consumeInput(DefaultNHttpClientConnection.java:265)
> at 
> org.apache.http.impl.nio.client.InternalIODispatch.onInputReady(InternalIODispatch.java:87)
> at 
> org.apache.http.impl.nio.client.InternalIODispatch.onInputReady(InternalIODispatch.java:40)
> at 
> org.apache.http.impl.nio.reactor.AbstractIODispatch.inputReady(AbstractIODispatch.java:114)
> at 
> org.apache.http.impl.nio.reactor.BaseIOReactor.readable(BaseIOReactor.java:162)
> at 
> org.apache.http.impl.nio.reactor.AbstractIOReactor.processEvent(AbstractIOReactor.java:337)
> at 
> org.apache.http.impl.nio.reactor.AbstractIOReactor.processEvents(AbstractIOReactor.java:315)
> at 
> org.apache.http.impl.nio.reactor.AbstractIOReactor.execute(AbstractIOReactor.java:276)
> at 
> org.apache.http.impl.nio.reactor.BaseIOReactor.execute(BaseIOReactor.java:104)
> at 
> org.apache.http.impl.nio.reactor.AbstractMultiworkerIOReactor$Worker.run(AbstractMultiworkerIOReactor.java:591)
> … 1 more
> *Caused by: java.lang.NullPointerException*
> *at java.base/java.util.Objects.requireNonNull(Unknown Source)*
> *at org.opensearch.action.DocWriteResponse.<init>(DocWriteResponse.java:140)*
> *at org.opensearch.action.index.IndexResponse.<init>(IndexResponse.java:67) …*
> It seems that this error is common but has no published solution.
> Although the connector was built for OpenSearch 1.3, it still works for 
> sending data and creating the index on OpenSearch 2.12.0, but this error 
> persists with all OpenSearch versions greater than 1.13.
> *OpenSearch support's reply was:*
> *"this is unexpected, could you please create an issue here [1], the issue is 
> caused by the _type property that has been removed in 2.x"*



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-34663) flink-opensearch connector Unable to parse response body for Response

2024-03-13 Thread Andriy Redko (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34663?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17826884#comment-17826884
 ] 

Andriy Redko commented on FLINK-34663:
--

Thanks for trying it out, [~wgendy]. The pull request is not merged yet (I only 
mentioned that there is an attempt to add separate OS v1 and v2 support); I 
will look into the possible options shortly.



[jira] [Commented] (FLINK-34663) flink-opensearch connector Unable to parse response body for Response

2024-03-13 Thread Andriy Redko (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34663?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17826845#comment-17826845
 ] 

Andriy Redko commented on FLINK-34663:
--

[https://github.com/apache/flink-connector-opensearch/pull/38] would solve 
this; I will try to find out whether we can make it work with the 1.x client as well.



[jira] [Commented] (FLINK-33020) OpensearchSinkTest.testAtLeastOnceSink timed out

2023-09-04 Thread Andriy Redko (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17761839#comment-17761839
 ] 

Andriy Redko commented on FLINK-33020:
--

[~martijnvisser] yes, I think it happens from time to time because of the async 
flow this test exercises; it can be flaky. I will take a look shortly.

> OpensearchSinkTest.testAtLeastOnceSink timed out
> 
>
> Key: FLINK-33020
> URL: https://issues.apache.org/jira/browse/FLINK-33020
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Opensearch
>Affects Versions: opensearch-1.0.2
>Reporter: Martijn Visser
>Priority: Blocker
>
> https://github.com/apache/flink-connector-opensearch/actions/runs/6061205003/job/16446139552#step:13:1029
> {code:java}
> Error:  Tests run: 9, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 9.837 
> s <<< FAILURE! - in 
> org.apache.flink.streaming.connectors.opensearch.OpensearchSinkTest
> Error:  
> org.apache.flink.streaming.connectors.opensearch.OpensearchSinkTest.testAtLeastOnceSink
>   Time elapsed: 5.022 s  <<< ERROR!
> java.util.concurrent.TimeoutException: testAtLeastOnceSink() timed out after 
> 5 seconds
>   at 
> org.junit.jupiter.engine.extension.TimeoutInvocation.createTimeoutException(TimeoutInvocation.java:70)
>   at 
> org.junit.jupiter.engine.extension.TimeoutInvocation.proceed(TimeoutInvocation.java:59)
>   at 
> org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:149)
>   at 
> org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:140)
>   at 
> org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestMethod(TimeoutExtension.java:84)
>   at 
> org.junit.jupiter.engine.execution.ExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(ExecutableInvoker.java:115)
>   at 
> org.junit.jupiter.engine.execution.ExecutableInvoker.lambda$invoke$0(ExecutableInvoker.java:105)
>   at 
> org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106)
>   at 
> org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64)
>   at 
> org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45)
>   at 
> org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37)
>   at 
> org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:104)
>   at 
> org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:98)
>   at 
> org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeTestMethod$7(TestMethodTestDescriptor.java:214)
>   at 
> org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
>   at 
> org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeTestMethod(TestMethodTestDescriptor.java:210)
>   at 
> org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:135)
>   at 
> org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:66)
>   at 
> org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$6(NodeTestTask.java:151)
>   at 
> org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
>   at 
> org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:141)
>   at 
> org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137)
>   at 
> org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$9(NodeTestTask.java:139)
>   at 
> org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
>   at 
> org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:138)
>   at 
> org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:95)
>   at java.base/java.util.ArrayList.forEach(ArrayList.java:1541)
>   at 
> org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:41)
>   at 
> org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$6(NodeTestTask.java:155)
>   at 
> org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
>   at 
> 

[jira] [Commented] (FLINK-32891) Opensearch SQL connector crash job on upsert from multiple sources (409 version conflict)

2023-08-17 Thread Andriy Redko (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17755686#comment-17755686
 ] 

Andriy Redko commented on FLINK-32891:
--

Thanks, [~thebranchnotmerged], I think this could be a useful enhancement. 
[~martijnvisser], if you have no objections, please assign it to me.

> Opensearch SQL connector crash job on upsert from multiple sources (409 
> version conflict)
> -
>
> Key: FLINK-32891
> URL: https://issues.apache.org/jira/browse/FLINK-32891
> Project: Flink
>  Issue Type: Bug
>Affects Versions: opensearch-1.0.1
>Reporter: Kobe Fitussi
>Priority: Critical
>
> Using the Opensearch SQL connector for Flink, an attempt to perform an upsert 
> for the same document ID from multiple jobs at the same time resulted in a 
> job crash with a 409 version conflict error message. In our environment we 
> cannot guarantee that messages will arrive separately.
> I suggest that in RowOpensearchEmitter.processUpsert(), when an UpdateRequest 
> object is created, it also call the existing UpdateRequest method 
> "retryOnConflict(int retries)", which is designed to remedy this issue, with 
> 'retries' set via the OpensearchConnectorOptions class (default 0).
> Suggested parameter name: 'sink.upsert.retry-on-conflict'
> Also, I do not see evidence of bulk retries being effective in this case.
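The retry-on-conflict behaviour proposed above can be sketched as a small, self-contained simulation. This is illustrative Java only, not the connector's actual code: the real change would call UpdateRequest.retryOnConflict(int) inside RowOpensearchEmitter.processUpsert(), and the 'retries' parameter here stands in for the suggested 'sink.upsert.retry-on-conflict' option.

```java
import java.util.function.IntPredicate;

public class RetryOnConflictSketch {

    /**
     * Attempts an upsert, retrying while the attempt reports a 409 version
     * conflict. 'retries' mirrors the proposed 'sink.upsert.retry-on-conflict'
     * option: the number of extra attempts allowed after the first (0 = fail fast).
     *
     * @param attempt returns true on success, false on a version conflict
     */
    public static boolean upsertWithRetry(IntPredicate attempt, int retries) {
        for (int i = 0; i <= retries; i++) {
            if (attempt.test(i)) {
                return true; // document indexed/updated successfully
            }
        }
        return false; // retries exhausted: the caller would fail the bulk request
    }
}
```

With retries set to 0 this reproduces today's fail-fast behaviour; a positive value absorbs transient conflicts from concurrent upserts.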





[jira] [Commented] (FLINK-30998) Add optional exception handler to flink-connector-opensearch

2023-08-07 Thread Andriy Redko (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-30998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17751826#comment-17751826
 ] 

Andriy Redko commented on FLINK-30998:
--

Thank you for the contribution, [~leonidilyevsky]! Please refer to [1], which 
describes the release process for connectors; a brief summary is below (you 
could probably nominate [~martijnvisser], since I am not a committer):

> Anybody can propose a release on the dev@ mailing list, giving a solid 
> argument and nominating a committer as the Release Manager (including 
> themselves).

Thank you.

[1] 
https://cwiki.apache.org/confluence/display/FLINK/Creating+a+flink-connector+release

> Add optional exception handler to flink-connector-opensearch
> 
>
> Key: FLINK-30998
> URL: https://issues.apache.org/jira/browse/FLINK-30998
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Opensearch
>Affects Versions: 1.16.1
>Reporter: Leonid Ilyevsky
>Assignee: Leonid Ilyevsky
>Priority: Major
>  Labels: pull-request-available
> Fix For: opensearch-1.0.2
>
>
> Currently, when there is a failure coming from Opensearch, a 
> FlinkRuntimeException is thrown from the OpensearchWriter.java code (line 346). 
> This makes the Flink pipeline fail; there is no way to handle the exception 
> in client code.
> I suggest adding an option to set a failure handler, similar to the way it is 
> done in the Elasticsearch connector. This way the client code has a chance to 
> examine the failure and handle it.
> Here is a use case where it would be very useful. We are using streams on the 
> Opensearch side, and we are setting our own document IDs. Sometimes these IDs 
> are duplicated; we need to ignore this situation and continue (this works for 
> us with Elasticsearch).
> However, with the Opensearch connector the error comes back saying that the 
> batch failed (even though most of the documents were indexed and only the ones 
> with duplicated IDs were rejected), and the whole Flink job fails.
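The optional failure handler described above could look roughly like the following self-contained sketch. The names FailureHandler, BulkItemFailure, and filterFatal are invented for illustration and are not part of the connector's API; the idea is simply that client code inspects each per-document failure and decides whether it should fail the job.

```java
import java.util.ArrayList;
import java.util.List;

public class FailureHandlerSketch {

    /** Invented stand-in for a per-document bulk failure report. */
    public record BulkItemFailure(String documentId, int status, String reason) {}

    /** Invented callback: return true to swallow the failure, false to fail the job. */
    public interface FailureHandler {
        boolean onFailure(BulkItemFailure failure);
    }

    /** Returns only the failures the handler refused to swallow. */
    public static List<BulkItemFailure> filterFatal(
            List<BulkItemFailure> failures, FailureHandler handler) {
        List<BulkItemFailure> fatal = new ArrayList<>();
        for (BulkItemFailure f : failures) {
            if (!handler.onFailure(f)) {
                fatal.add(f);
            }
        }
        return fatal;
    }
}
```

For the duplicate-ID use case, a handler that returns true for status 409 would let the job continue while any other failure still aborts the pipeline.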





[jira] [Commented] (FLINK-32357) Elasticsearch v3.0 won't compile when testing against Flink 1.17.1

2023-06-15 Thread Andriy Redko (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17733046#comment-17733046
 ] 

Andriy Redko commented on FLINK-32357:
--

Yes, we certainly had it!

> Elasticsearch v3.0 won't compile when testing against Flink 1.17.1
> --
>
> Key: FLINK-32357
> URL: https://issues.apache.org/jira/browse/FLINK-32357
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / ElasticSearch
>Reporter: Martijn Visser
>Assignee: Sergey Nuyanzin
>Priority: Major
>
> {code:java}
> [INFO] 
> 
> Error:  Failed to execute goal 
> org.apache.maven.plugins:maven-surefire-plugin:3.0.0-M5:test (default-test) 
> on project flink-connector-elasticsearch-base: Execution default-test of goal 
> org.apache.maven.plugins:maven-surefire-plugin:3.0.0-M5:test failed: 
> org.junit.platform.commons.JUnitException: TestEngine with ID 'archunit' 
> failed to discover tests: 
> com.tngtech.archunit.lang.syntax.elements.MethodsThat.areAnnotatedWith(Ljava/lang/Class;)Ljava/lang/Object;
>  -> [Help 1]
> {code}
> https://github.com/apache/flink-connector-elasticsearch/actions/runs/5277721611/jobs/9546112876#step:13:159





[jira] [Commented] (FLINK-31856) Add support for Opensearch Connector REST client customization

2023-06-14 Thread Andriy Redko (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31856?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17732537#comment-17732537
 ] 

Andriy Redko commented on FLINK-31856:
--

Thanks, [~mhjtrifork], the change is in; it should now be easier to customize. Thank you.

> Add support for Opensearch Connector REST client customization
> --
>
> Key: FLINK-31856
> URL: https://issues.apache.org/jira/browse/FLINK-31856
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Opensearch
>Affects Versions: opensearch-1.0.0
>Reporter: Michael Hempel-Jørgensen
>Assignee: Andriy Redko
>Priority: Minor
>  Labels: pull-request-available
> Fix For: opensearch-1.1.0, opensearch-1.0.2
>
>
> It is not currently possible to customise the Opensearch REST client in all 
> of the connectors.
> We are currently using the OpensearchSink in 
> [connector/opensearch/sink/OpensearchSink.java|https://github.com/apache/flink-connector-opensearch/blob/main/flink-connector-opensearch/src/main/java/org/apache/flink/connector/opensearch/sink/OpensearchSink.java]
>  and need to be able to authenticate/authorise with Opensearch using OAuth2, 
> and therefore need to be able to pass a bearer token to the bulk index calls.
> The access token will expire and change during the job's lifetime, and it must 
> be possible to handle this, i.e. providing a token when building the sink is 
> not enough.
> For reference see the mailing list discussion: 
> https://lists.apache.org/thread/9rvwhzjwzm6yq9mg481sdxqx9nqr1x5g





[jira] [Commented] (FLINK-31856) Add support for Opensearch Connector REST client customization

2023-06-13 Thread Andriy Redko (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31856?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17732076#comment-17732076
 ] 

Andriy Redko commented on FLINK-31856:
--

Thanks a lot, [~mhjtrifork]. Sadly there is no way to compose 
HttpClientConfigCallback, and duplicating it is not great. What we could do is:

 - open up DefaultRestClientFactory
 - add an overridable configureHttpClientBuilder() method

In that case, you could inherit from DefaultRestClientFactory and override / 
tailor the default HttpAsyncClientBuilder customizations. Would that help?
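The overridable-hook idea above could be sketched as follows. The surrounding types are simplified stand-ins for this example (ClientBuilder is not the real HttpAsyncClientBuilder, and the factory classes are mocked up); only the configureHttpClientBuilder() extension point mirrors the proposal.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

public class RestClientFactorySketch {

    /** Simplified stand-in for HttpAsyncClientBuilder: just collects default headers. */
    public static class ClientBuilder {
        public final Map<String, String> defaultHeaders = new HashMap<>();

        public ClientBuilder header(String name, String value) {
            defaultHeaders.put(name, value);
            return this;
        }
    }

    /** Stand-in for DefaultRestClientFactory with the proposed overridable hook. */
    public static class DefaultRestClientFactory {
        protected ClientBuilder configureHttpClientBuilder(ClientBuilder builder) {
            return builder; // default behaviour: no extra customization
        }

        public ClientBuilder build() {
            return configureHttpClientBuilder(new ClientBuilder());
        }
    }

    /** User subclass tailoring the builder, here to inject an OAuth2 bearer token. */
    public static class OAuth2RestClientFactory extends DefaultRestClientFactory {
        private final Supplier<String> tokenSupplier;

        public OAuth2RestClientFactory(Supplier<String> tokenSupplier) {
            this.tokenSupplier = tokenSupplier;
        }

        @Override
        protected ClientBuilder configureHttpClientBuilder(ClientBuilder builder) {
            // the supplier is consulted each time the builder is configured,
            // so a refreshed token can be picked up
            return builder.header("Authorization", "Bearer " + tokenSupplier.get());
        }
    }
}
```

Passing a token Supplier rather than a fixed string is what makes the expiring-token use case workable: the subclass fetches a current token whenever the client is (re)configured.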



[jira] [Commented] (FLINK-31856) Add support for Opensearch Connector REST client customization

2023-06-06 Thread Andriy Redko (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31856?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17729724#comment-17729724
 ] 

Andriy Redko commented on FLINK-31856:
--

Thanks, [~mhjtrifork]. Building locally is probably the way to go; I am not 
sure snapshots of the external connectors are published at the moment.



[jira] [Commented] (FLINK-31856) Add support for Opensearch Connector REST client customization

2023-06-01 Thread Andriy Redko (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31856?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17728371#comment-17728371
 ] 

Andriy Redko commented on FLINK-31856:
--

[~mhjtrifork], we have a pull request open. Would you mind giving it a try to 
see whether the change unblocks your particular use case? Thank you.



[jira] [Resolved] (FLINK-32209) Opensearch connector should remove the dependency on flink-shaded

2023-05-26 Thread Andriy Redko (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andriy Redko resolved FLINK-32209.
--
Resolution: Won't Do

> Opensearch connector should remove the dependency on flink-shaded
> -
>
> Key: FLINK-32209
> URL: https://issues.apache.org/jira/browse/FLINK-32209
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Opensearch
>Affects Versions: opensearch-1.0.1
>Reporter: Andriy Redko
>Assignee: Andriy Redko
>Priority: Major
> Fix For: opensearch-1.1.0
>
>
> The Opensearch connector depends on flink-shaded. With the externalization of 
> the connectors, they shouldn't rely on flink-shaded.





[jira] [Commented] (FLINK-32209) Opensearch connector should remove the dependency on flink-shaded

2023-05-26 Thread Andriy Redko (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17726691#comment-17726691
 ] 

Andriy Redko commented on FLINK-32209:
--

Already fixed by 
https://github.com/apache/flink-connector-opensearch/commit/85e9cad4f09519543e149530f8a61b2635ca506e



[jira] [Created] (FLINK-32209) Opensearch connector should remove the dependency on flink-shaded

2023-05-26 Thread Andriy Redko (Jira)
Andriy Redko created FLINK-32209:


 Summary: Opensearch connector should remove the dependency on 
flink-shaded
 Key: FLINK-32209
 URL: https://issues.apache.org/jira/browse/FLINK-32209
 Project: Flink
  Issue Type: Bug
  Components: Connectors / Opensearch
Affects Versions: opensearch-1.0.1
Reporter: Andriy Redko
 Fix For: opensearch-1.1.0








[jira] [Updated] (FLINK-32209) Opensearch connector should remove the dependency on flink-shaded

2023-05-26 Thread Andriy Redko (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andriy Redko updated FLINK-32209:
-
Description: The Opensearch connector depends on flink-shaded. With the 
externalization of the connector, the connectors shouldn't rely on Flink-Shaded

> Opensearch connector should remove the dependency on flink-shaded
> -
>
> Key: FLINK-32209
> URL: https://issues.apache.org/jira/browse/FLINK-32209
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Opensearch
>Affects Versions: opensearch-1.0.1
>Reporter: Andriy Redko
>Priority: Major
> Fix For: opensearch-1.1.0
>
>
> The Opensearch connector depends on flink-shaded. With the externalization of 
> the connector, the connectors shouldn't rely on Flink-Shaded





[jira] [Commented] (FLINK-31856) Add support for Opensearch Connector REST client customization

2023-04-20 Thread Andriy Redko (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31856?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17714614#comment-17714614
 ] 

Andriy Redko commented on FLINK-31856:
--

[~martijnvisser] please feel free to assign it to me, thank you

> Add support for Opensearch Connector REST client customization
> --
>
> Key: FLINK-31856
> URL: https://issues.apache.org/jira/browse/FLINK-31856
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Opensearch
>Affects Versions: opensearch-1.0.0
>Reporter: Michael Hempel-Jørgensen
>Priority: Minor
>
> It is not currently possible to customise the Opensearch REST client in all 
> of the connectors.
> We are currently using the OpensearchSink in 
> [connector/opensearch/sink/OpensearchSink.java|https://github.com/apache/flink-connector-opensearch/blob/main/flink-connector-opensearch/src/main/java/org/apache/flink/connector/opensearch/sink/OpensearchSink.java]
>  and need to be able to authenticate/authorise with Opensearch using OAuth2 
> and therefore need to be able to pass a bearer token to the bulk index calls.
> The access token will expire and change during the job's lifetime, so it must 
> be possible to handle this, i.e. providing a token when building the sink is 
> not enough.
> For reference see the mailing list discussion: 
> https://lists.apache.org/thread/9rvwhzjwzm6yq9mg481sdxqx9nqr1x5g
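The expiring-token requirement above can be sketched in plain Java (illustrative only: `RefreshingTokenSupplier` and its names are not connector API; the connector, per this issue, does not yet expose such a hook). The idea is a supplier the REST client customization would call on every bulk request, refreshing the bearer token lazily when it nears expiry:

```java
import java.time.Clock;
import java.time.Duration;
import java.time.Instant;
import java.util.function.Supplier;

// Illustrative sketch: a token supplier that refreshes the bearer token when
// it expires, so each bulk request can carry a currently valid token.
public class RefreshingTokenSupplier implements Supplier<String> {
    private final Supplier<String> fetchFreshToken; // e.g. an OAuth2 client-credentials call
    private final Duration lifetime;
    private final Clock clock;
    private String token;
    private Instant expiresAt = Instant.MIN;

    public RefreshingTokenSupplier(Supplier<String> fetchFreshToken,
                                   Duration lifetime, Clock clock) {
        this.fetchFreshToken = fetchFreshToken;
        this.lifetime = lifetime;
        this.clock = clock;
    }

    @Override
    public synchronized String get() {
        // Refresh lazily once the cached token has expired.
        if (clock.instant().isAfter(expiresAt)) {
            token = fetchFreshToken.get();
            expiresAt = clock.instant().plus(lifetime);
        }
        return "Bearer " + token;
    }
}
```

A customization callback on the underlying REST client could then attach `get()`'s value as the `Authorization` header of each outgoing request.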





[jira] [Updated] (FLINK-31068) Document how to use Opensearch connector with OpenSearch 1.x / 2.x / 3.x (upcoming) clusters

2023-02-14 Thread Andriy Redko (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31068?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andriy Redko updated FLINK-31068:
-
Summary: Document how to use Opensearch connector with OpenSearch 1.x / 2.x 
/ 3.x (upcoming) clusters  (was: Document how to use Opensearch connector with 
OpenSearch 1.x / 2.x / ... clusters)

> Document how to use Opensearch connector with OpenSearch 1.x / 2.x / 3.x 
> (upcoming) clusters
> 
>
> Key: FLINK-31068
> URL: https://issues.apache.org/jira/browse/FLINK-31068
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Opensearch
>Affects Versions: opensearch-1.0.0
>Reporter: Andriy Redko
>Priority: Major
>
> By default, the Opensearch connector uses the OpenSearch client from the 
> 1.3.x release line (due to Apache Flink's JDK-8 baseline requirement). 
> However, the Flink Opensearch Connector is fully compatible with the 1.x / 
> 2.x / 3.x (upcoming) release lines. The documentation does not describe how 
> to swap the Opensearch client libraries.





[jira] [Created] (FLINK-31068) Document how to use Opensearch connector with OpenSearch 1.x / 2.x / ... clusters

2023-02-14 Thread Andriy Redko (Jira)
Andriy Redko created FLINK-31068:


 Summary: Document how to use Opensearch connector with OpenSearch 
1.x / 2.x / ... clusters
 Key: FLINK-31068
 URL: https://issues.apache.org/jira/browse/FLINK-31068
 Project: Flink
  Issue Type: Improvement
  Components: Connectors / Opensearch
Affects Versions: opensearch-1.0.0
Reporter: Andriy Redko


By default, the Opensearch connector uses the OpenSearch client from the 1.3.x 
release line (due to Apache Flink's JDK-8 baseline requirement). However, the 
Flink Opensearch Connector is fully compatible with the 1.x / 2.x / 3.x 
(upcoming) release lines. The documentation does not describe how to swap the 
Opensearch client libraries.
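One way to swap the client (an illustrative sketch only; the exact artifact versions depend on your Flink and connector releases and are assumptions here) is to pin a newer OpenSearch client in the downstream project's pom.xml:

```xml
<!-- Illustrative pom.xml fragment: versions shown are examples. -->
<dependencies>
  <dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-connector-opensearch</artifactId>
    <version>1.0.0-1.16</version>
  </dependency>
  <!-- Override the default 1.3.x client with a 2.x client
       (note: 2.x clients require at least JDK 11). -->
  <dependency>
    <groupId>org.opensearch.client</groupId>
    <artifactId>opensearch-rest-high-level-client</artifactId>
    <version>2.5.0</version>
  </dependency>
</dependencies>
```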





[jira] [Comment Edited] (FLINK-30998) Add optional exception handler to flink-connector-opensearch

2023-02-13 Thread Andriy Redko (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-30998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17687966#comment-17687966
 ] 

Andriy Redko edited comment on FLINK-30998 at 2/13/23 2:41 PM:
---

Hi [~lilyevsky] 

Ah, the dependabot pull request has to be closed (I sadly cannot do that). 
[~martijnvisser], could you please help here?

> I know that the connector built from main branch will work with 2.x cluster, 
> but only if in my project I explicitly upgrade opensearch version to 2.5.0 . 
> Again, this is not a problem at all. Maybe you just should mention this fact 
> in README.

This is a good idea, I will update the documentation & README.md, thank you.

> Without setting opensearch version to 2.5.0 it failed on parsing some 
> responses from the cluster, complaining about one missing field.

Sorry, I didn't get the context for this one. You mean using 1.x client (the 
connector's default) with OpenSearch 2.x cluster, is that right?

> Also, a minor technical difficulty with unit tests. As I mentioned, I had to 
> do some small fixes there to make it compile with opensearch 2.5.0.

That is correct, it has been discussed here: 
https://github.com/apache/flink-connector-opensearch/pull/4#issuecomment-1370984907 
The unit tests' dependency should not impact the connector itself.


was (Author: reta):
Hi [~lilyevsky] 

Ah, the dependabot pull request has to be closed (I sadly cannot do that). 
[~martijnvisser], could you please help here?

> I know that the connector built from main branch will work with 2.x cluster, 
> but only if in my project I explicitly upgrade opensearch version to 2.5.0 . 
> Again, this is not a problem at all. Maybe you just should mention this fact 
> in README.

This is a good idea, I will update the documentation & README.md, thank you.

> Without setting opensearch version to 2.5.0 it failed on parsing some 
> responses from the cluster, complaining about one missing field.

Sorry, I didn't get the context for this one. You mean using 1.x client (the 
connector's default) with OpenSearch 2.x cluster, is that right?

> Also, a minor technical difficulty with unit tests. As I mentioned, I had to 
> do some small fixes there to make it compile with opensearch 2.5.0.

That is correct, it has been discussed here: 
https://github.com/apache/flink-connector-opensearch/pull/4#issuecomment-1370984907 
The unit tests' dependency should not impact the connector itself.

> Add optional exception handler to flink-connector-opensearch
> 
>
> Key: FLINK-30998
> URL: https://issues.apache.org/jira/browse/FLINK-30998
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Opensearch
>Affects Versions: 1.16.1
>Reporter: Leonid Ilyevsky
>Priority: Major
>
> Currently, when there is a failure coming from Opensearch, a 
> FlinkRuntimeException is thrown from the OpensearchWriter.java code (line 
> 346). This makes the Flink pipeline fail, and there is no way to handle the 
> exception in the client code.
> I suggest adding an option to set a failure handler, similar to the way it 
> is done in the Elasticsearch connector. This way the client code has a 
> chance to examine the failure and handle it.
> Here is a use case where it would be very useful. We are using streams on 
> the Opensearch side, and we are setting our own document IDs. Sometimes 
> these IDs are duplicated; we need to ignore this situation and continue 
> (this is how it works for us with Elasticsearch).
> However, with the Opensearch connector, the error comes back saying that 
> the batch failed (even though most of the documents were indexed and only 
> the ones with duplicated IDs were rejected), and the whole Flink job fails.
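A failure handler of the kind proposed above could look like the following plain-Java sketch (hedged: `BulkItemFailure` and `ignoreDuplicates` are illustrative names, not part of the connector's actual API, which per this thread does not expose a handler yet):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Illustrative stand-in for a per-item bulk failure report.
final class BulkItemFailure {
    final String documentId;
    final int status; // HTTP-style status of the failed bulk item

    BulkItemFailure(String documentId, int status) {
        this.documentId = documentId;
        this.status = status;
    }
}

public class FailureHandlerSketch {
    // A handler that swallows version conflicts (409, e.g. duplicate IDs)
    // and rethrows anything else, mirroring the Elasticsearch connector's
    // failure-handler idea described in this issue.
    static Consumer<BulkItemFailure> ignoreDuplicates(List<String> rejected) {
        return f -> {
            if (f.status == 409) {
                rejected.add(f.documentId); // record the duplicate and continue
            } else {
                throw new RuntimeException("bulk item failed: " + f.documentId);
            }
        };
    }

    public static void main(String[] args) {
        List<String> rejected = new ArrayList<>();
        Consumer<BulkItemFailure> handler = ignoreDuplicates(rejected);
        handler.accept(new BulkItemFailure("u1", 409)); // duplicate -> ignored
        System.out.println(rejected); // prints [u1]
    }
}
```

The sink would invoke such a handler per failed bulk item instead of unconditionally throwing `FlinkRuntimeException`, letting the job continue past tolerable failures.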





[jira] [Commented] (FLINK-30998) Add optional exception handler to flink-connector-opensearch

2023-02-13 Thread Andriy Redko (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-30998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17687966#comment-17687966
 ] 

Andriy Redko commented on FLINK-30998:
--

Hi [~lilyevsky] 

Ah, the dependabot pull request has to be closed (I sadly cannot do that). 
[~martijnvisser], could you please help here?

> I know that the connector built from main branch will work with 2.x cluster, 
> but only if in my project I explicitly upgrade opensearch version to 2.5.0 . 
> Again, this is not a problem at all. Maybe you just should mention this fact 
> in README.

This is a good idea, I will update the documentation & README.md, thank you.

> Without setting opensearch version to 2.5.0 it failed on parsing some 
> responses from the cluster, complaining about one missing field.

Sorry, I didn't get the context for this one. You mean using 1.x client (the 
connector's default) with OpenSearch 2.x cluster, is that right?

> Also, a minor technical difficulty with unit tests. As I mentioned, I had to 
> do some small fixes there to make it compile with opensearch 2.5.0.

That is correct, it has been discussed here: 
https://github.com/apache/flink-connector-opensearch/pull/4#issuecomment-1370984907 
The unit tests' dependency should not impact the connector itself.

> Add optional exception handler to flink-connector-opensearch
> 
>
> Key: FLINK-30998
> URL: https://issues.apache.org/jira/browse/FLINK-30998
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Opensearch
>Affects Versions: 1.16.1
>Reporter: Leonid Ilyevsky
>Priority: Major
>
> Currently, when there is a failure coming from Opensearch, a 
> FlinkRuntimeException is thrown from the OpensearchWriter.java code (line 
> 346). This makes the Flink pipeline fail, and there is no way to handle the 
> exception in the client code.
> I suggest adding an option to set a failure handler, similar to the way it 
> is done in the Elasticsearch connector. This way the client code has a 
> chance to examine the failure and handle it.
> Here is a use case where it would be very useful. We are using streams on 
> the Opensearch side, and we are setting our own document IDs. Sometimes 
> these IDs are duplicated; we need to ignore this situation and continue 
> (this is how it works for us with Elasticsearch).
> However, with the Opensearch connector, the error comes back saying that 
> the batch failed (even though most of the documents were indexed and only 
> the ones with duplicated IDs were rejected), and the whole Flink job fails.





[jira] [Commented] (FLINK-30998) Add optional exception handler to flink-connector-opensearch

2023-02-13 Thread Andriy Redko (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-30998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17687932#comment-17687932
 ] 

Andriy Redko commented on FLINK-30998:
--

Thanks [~lilyevsky] , looking into it

 

> Also I noticed on the branch you still have the Flink version as 1.16.0, 
> while in main it is 1.16.1, so probably you are going to correct that.

Hm... this should not be the case: 
https://github.com/apache/flink-connector-opensearch/commit/17f5fcafdb393b0b460cbe5e56906e24576221f1 
Could you please point out where you still see 1.16.0?

 

> Also question: are you going to maintain two variants of this connector? One 
> for Opensearch 1.3.0 and another for 2.5.0? 

The OpenSearch connector should work with both 1.x and 2.x clusters. The 1.x 
client is the baseline since 2.x clients need at least JDK-11, while Apache 
Flink has a JDK-8 baseline.

> Add optional exception handler to flink-connector-opensearch
> 
>
> Key: FLINK-30998
> URL: https://issues.apache.org/jira/browse/FLINK-30998
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Opensearch
>Affects Versions: 1.16.1
>Reporter: Leonid Ilyevsky
>Priority: Major
>
> Currently, when there is a failure coming from Opensearch, a 
> FlinkRuntimeException is thrown from the OpensearchWriter.java code (line 
> 346). This makes the Flink pipeline fail, and there is no way to handle the 
> exception in the client code.
> I suggest adding an option to set a failure handler, similar to the way it 
> is done in the Elasticsearch connector. This way the client code has a 
> chance to examine the failure and handle it.
> Here is a use case where it would be very useful. We are using streams on 
> the Opensearch side, and we are setting our own document IDs. Sometimes 
> these IDs are duplicated; we need to ignore this situation and continue 
> (this is how it works for us with Elasticsearch).
> However, with the Opensearch connector, the error comes back saying that 
> the batch failed (even though most of the documents were indexed and only 
> the ones with duplicated IDs were rejected), and the whole Flink job fails.





[jira] [Commented] (FLINK-30998) Add optional exception handler to flink-connector-opensearch

2023-02-10 Thread Andriy Redko (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-30998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17687346#comment-17687346
 ] 

Andriy Redko commented on FLINK-30998:
--

[~lilyevsky] thanks, you could send the pull request to 
https://github.com/apache/flink-connector-opensearch/ and we could work it 
through. Or would another option be more comfortable for you? Thanks!

> Add optional exception handler to flink-connector-opensearch
> 
>
> Key: FLINK-30998
> URL: https://issues.apache.org/jira/browse/FLINK-30998
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Opensearch
>Affects Versions: 1.16.1
>Reporter: Leonid Ilyevsky
>Priority: Major
>
> Currently, when there is a failure coming from Opensearch, a 
> FlinkRuntimeException is thrown from the OpensearchWriter.java code (line 
> 346). This makes the Flink pipeline fail, and there is no way to handle the 
> exception in the client code.
> I suggest adding an option to set a failure handler, similar to the way it 
> is done in the Elasticsearch connector. This way the client code has a 
> chance to examine the failure and handle it.
> Here is a use case where it would be very useful. We are using streams on 
> the Opensearch side, and we are setting our own document IDs. Sometimes 
> these IDs are duplicated; we need to ignore this situation and continue 
> (this is how it works for us with Elasticsearch).
> However, with the Opensearch connector, the error comes back saying that 
> the batch failed (even though most of the documents were indexed and only 
> the ones with duplicated IDs were rejected), and the whole Flink job fails.





[jira] [Commented] (FLINK-30998) Add optional exception handler to flink-connector-opensearch

2023-02-10 Thread Andriy Redko (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-30998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17687289#comment-17687289
 ] 

Andriy Redko commented on FLINK-30998:
--

[~martijnvisser] [~lilyevsky] sure, happy to help here

> Add optional exception handler to flink-connector-opensearch
> 
>
> Key: FLINK-30998
> URL: https://issues.apache.org/jira/browse/FLINK-30998
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Opensearch
>Affects Versions: 1.16.1
>Reporter: Leonid Ilyevsky
>Priority: Major
>
> Currently, when there is a failure coming from Opensearch, a 
> FlinkRuntimeException is thrown from the OpensearchWriter.java code (line 
> 346). This makes the Flink pipeline fail, and there is no way to handle the 
> exception in the client code.
> I suggest adding an option to set a failure handler, similar to the way it 
> is done in the Elasticsearch connector. This way the client code has a 
> chance to examine the failure and handle it.
> Here is a use case where it would be very useful. We are using streams on 
> the Opensearch side, and we are setting our own document IDs. Sometimes 
> these IDs are duplicated; we need to ignore this situation and continue 
> (this is how it works for us with Elasticsearch).
> However, with the Opensearch connector, the error comes back saying that 
> the batch failed (even though most of the documents were indexed and only 
> the ones with duplicated IDs were rejected), and the whole Flink job fails.





[jira] [Resolved] (FLINK-30718) Cannot assign instance of java.lang.invoke.SerializedLambda to field org.apache.flink.connector.opensearch.sink.OpensearchSink.emitter

2023-01-31 Thread Andriy Redko (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-30718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andriy Redko resolved FLINK-30718.
--
Resolution: Won't Fix

> Cannot assign instance of java.lang.invoke.SerializedLambda to field 
> org.apache.flink.connector.opensearch.sink.OpensearchSink.emitter 
> ---
>
> Key: FLINK-30718
> URL: https://issues.apache.org/jira/browse/FLINK-30718
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Opensearch
>Affects Versions: 1.16.0
>Reporter: Andriy Redko
>Priority: Major
> Attachments: example-opensearch.zip
>
>
> When using OpenSearchSink (Apache Flink OpenSearch Connector 1.0.0) 
> programmatically
> {noformat}
>  
> final StreamExecutionEnvironment env = StreamExecutionEnvironment
> .createRemoteEnvironment("localhost", 8081);
> final Collection<Tuple4<String, String, Long, Long>> users = new 
> ArrayList<>();
> users.add(Tuple4.of("u1", "admin", 100L, 200L));
> final DataStream<Tuple4<String, String, Long, Long>> source = 
> env.fromCollection(users);
> final OpensearchSink<Tuple4<String, String, Long, Long>> sink =
> new OpensearchSinkBuilder<Tuple4<String, String, Long, Long>>()
> .setHosts(new HttpHost("localhost", 9200, "https"))
> .setEmitter( (element, ctx, indexer) -> {
> indexer.add(
> Requests
> .indexRequest()
> .index("users")
> .id(element.f0)
> .source(Map.ofEntries(
> Map.entry("user_id", element.f0),
> Map.entry("user_name", element.f1),
> Map.entry("uv", element.f2),
> Map.entry("pv", element.f3)
> )));
> })
> .setConnectionUsername("admin")
> .setConnectionPassword("admin")
> .setAllowInsecure(true)
> .setBulkFlushMaxActions(1)
> .build();
> source.sinkTo(sink);
> env.execute("Opensearch end to end sink test example");
> {noformat}
> the stream processing fails with the exception
> {noformat}
> Caused by: org.apache.flink.streaming.runtime.tasks.StreamTaskException: 
> Cannot instantiate user function.
>   at 
> org.apache.flink.streaming.api.graph.StreamConfig.getStreamOperatorFactory(StreamConfig.java:399)
>   at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain.<init>(OperatorChain.java:162)
>   at 
> org.apache.flink.streaming.runtime.tasks.RegularOperatorChain.<init>(RegularOperatorChain.java:60)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.restoreInternal(StreamTask.java:681)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.restore(StreamTask.java:669)
>   at 
> org.apache.flink.runtime.taskmanager.Task.runWithSystemExitMonitoring(Task.java:935)
>   at 
> org.apache.flink.runtime.taskmanager.Task.restoreAndInvoke(Task.java:904)
>   at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:728)
>   at org.apache.flink.runtime.taskmanager.Task.run(Task.java:550)
>   at java.base/java.lang.Thread.run(Thread.java:829)
> Caused by: java.lang.ClassCastException: cannot assign instance of 
> java.lang.invoke.SerializedLambda to field 
> org.apache.flink.connector.opensearch.sink.OpensearchSink.emitter of type 
> org.apache.flink.connector.opensearch.sink.OpensearchEmitter in instance of 
> org.apache.flink.connector.opensearch.sink.OpensearchSink
>   at 
> java.base/java.io.ObjectStreamClass$FieldReflector.setObjFieldValues(ObjectStreamClass.java:2076)
>   at 
> java.base/java.io.ObjectStreamClass$FieldReflector.checkObjectFieldValueTypes(ObjectStreamClass.java:2039)
>   at 
> java.base/java.io.ObjectStreamClass.checkObjFieldValueTypes(ObjectStreamClass.java:1293)
>   at 
> java.base/java.io.ObjectInputStream.defaultCheckFieldValues(ObjectInputStream.java:2512)
>   at 
> java.base/java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2419)
>   at 
> java.base/java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2228)
>   at 
> java.base/java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1687)
>   at 
> java.base/java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2496)
>   at 
> java.base/java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2390)
>   at 
> java.base/java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2228)
>   at 
> java.base/java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1687)
>   at 
> 

[jira] [Commented] (FLINK-30718) Cannot assign instance of java.lang.invoke.SerializedLambda to field org.apache.flink.connector.opensearch.sink.OpensearchSink.emitter

2023-01-31 Thread Andriy Redko (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-30718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17682691#comment-17682691
 ] 

Andriy Redko commented on FLINK-30718:
--

The issue is caused by the Java lambda serialization mechanism: the classpath 
(on the Flink side) must contain the emitter implementation, otherwise the 
`SerializedLambda` can never be reified.
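The mechanism can be reproduced with plain JDK serialization (a minimal sketch; `SerFn` and the round-trip helper are illustrative, not connector code): a serializable lambda travels as a `SerializedLambda`, and reification only succeeds when the class that defined the lambda is present on the deserializing JVM's classpath.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.util.function.Function;

// A lambda is serializable only if its target interface is Serializable.
interface SerFn extends Function<Integer, Integer>, Serializable {}

public class LambdaSerDemo {
    // Round-trip an object through Java serialization, mimicking how Flink
    // ships a user function to the task managers.
    static Object roundTrip(Object o) throws Exception {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(o); // on the wire this is a SerializedLambda
        }
        try (ObjectInputStream ois =
                new ObjectInputStream(new ByteArrayInputStream(bos.toByteArray()))) {
            // Reification calls $deserializeLambda$ on the defining class
            // (here LambdaSerDemo), so that class must be on the classpath.
            return ois.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        SerFn fn = x -> x + 1;
        SerFn copy = (SerFn) roundTrip(fn);
        System.out.println(copy.apply(41)); // prints 42
    }
}
```

This works here only because the defining class is in the same JVM; in the Flink case the job jar containing the emitter's defining class must be shipped to the cluster, e.g. via the `createRemoteEnvironment(host, port, jarFiles...)` overload.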

> Cannot assign instance of java.lang.invoke.SerializedLambda to field 
> org.apache.flink.connector.opensearch.sink.OpensearchSink.emitter 
> ---
>
> Key: FLINK-30718
> URL: https://issues.apache.org/jira/browse/FLINK-30718
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Opensearch
>Affects Versions: 1.16.0
>Reporter: Andriy Redko
>Priority: Major
> Attachments: example-opensearch.zip
>
>
> When using OpenSearchSink (Apache Flink OpenSearch Connector 1.0.0) 
> programmatically
> {noformat}
>  
> final StreamExecutionEnvironment env = StreamExecutionEnvironment
> .createRemoteEnvironment("localhost", 8081);
> final Collection<Tuple4<String, String, Long, Long>> users = new 
> ArrayList<>();
> users.add(Tuple4.of("u1", "admin", 100L, 200L));
> final DataStream<Tuple4<String, String, Long, Long>> source = 
> env.fromCollection(users);
> final OpensearchSink<Tuple4<String, String, Long, Long>> sink =
> new OpensearchSinkBuilder<Tuple4<String, String, Long, Long>>()
> .setHosts(new HttpHost("localhost", 9200, "https"))
> .setEmitter( (element, ctx, indexer) -> {
> indexer.add(
> Requests
> .indexRequest()
> .index("users")
> .id(element.f0)
> .source(Map.ofEntries(
> Map.entry("user_id", element.f0),
> Map.entry("user_name", element.f1),
> Map.entry("uv", element.f2),
> Map.entry("pv", element.f3)
> )));
> })
> .setConnectionUsername("admin")
> .setConnectionPassword("admin")
> .setAllowInsecure(true)
> .setBulkFlushMaxActions(1)
> .build();
> source.sinkTo(sink);
> env.execute("Opensearch end to end sink test example");
> {noformat}
> the stream processing fails with the exception
> {noformat}
> Caused by: org.apache.flink.streaming.runtime.tasks.StreamTaskException: 
> Cannot instantiate user function.
>   at 
> org.apache.flink.streaming.api.graph.StreamConfig.getStreamOperatorFactory(StreamConfig.java:399)
>   at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain.<init>(OperatorChain.java:162)
>   at 
> org.apache.flink.streaming.runtime.tasks.RegularOperatorChain.<init>(RegularOperatorChain.java:60)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.restoreInternal(StreamTask.java:681)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.restore(StreamTask.java:669)
>   at 
> org.apache.flink.runtime.taskmanager.Task.runWithSystemExitMonitoring(Task.java:935)
>   at 
> org.apache.flink.runtime.taskmanager.Task.restoreAndInvoke(Task.java:904)
>   at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:728)
>   at org.apache.flink.runtime.taskmanager.Task.run(Task.java:550)
>   at java.base/java.lang.Thread.run(Thread.java:829)
> Caused by: java.lang.ClassCastException: cannot assign instance of 
> java.lang.invoke.SerializedLambda to field 
> org.apache.flink.connector.opensearch.sink.OpensearchSink.emitter of type 
> org.apache.flink.connector.opensearch.sink.OpensearchEmitter in instance of 
> org.apache.flink.connector.opensearch.sink.OpensearchSink
>   at 
> java.base/java.io.ObjectStreamClass$FieldReflector.setObjFieldValues(ObjectStreamClass.java:2076)
>   at 
> java.base/java.io.ObjectStreamClass$FieldReflector.checkObjectFieldValueTypes(ObjectStreamClass.java:2039)
>   at 
> java.base/java.io.ObjectStreamClass.checkObjFieldValueTypes(ObjectStreamClass.java:1293)
>   at 
> java.base/java.io.ObjectInputStream.defaultCheckFieldValues(ObjectInputStream.java:2512)
>   at 
> java.base/java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2419)
>   at 
> java.base/java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2228)
>   at 
> java.base/java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1687)
>   at 
> java.base/java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2496)
>   at 
> java.base/java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2390)
>   at 
> 

[jira] [Updated] (FLINK-30718) Cannot assign instance of java.lang.invoke.SerializedLambda to field org.apache.flink.connector.opensearch.sink.OpensearchSink.emitter

2023-01-17 Thread Andriy Redko (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-30718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andriy Redko updated FLINK-30718:
-
Attachment: example-opensearch.zip

> Cannot assign instance of java.lang.invoke.SerializedLambda to field 
> org.apache.flink.connector.opensearch.sink.OpensearchSink.emitter 
> ---
>
> Key: FLINK-30718
> URL: https://issues.apache.org/jira/browse/FLINK-30718
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Opensearch
>Affects Versions: 1.16.0
>Reporter: Andriy Redko
>Priority: Major
> Attachments: example-opensearch.zip
>
>
> When using OpenSearchSink (Apache Flink OpenSearch Connector 1.0.0) 
> programmatically
> {noformat}
>  
> final StreamExecutionEnvironment env = StreamExecutionEnvironment
> .createRemoteEnvironment("localhost", 8081);
> final Collection<Tuple4<String, String, Long, Long>> users = new 
> ArrayList<>();
> users.add(Tuple4.of("u1", "admin", 100L, 200L));
> final DataStream<Tuple4<String, String, Long, Long>> source = 
> env.fromCollection(users);
> final OpensearchSink<Tuple4<String, String, Long, Long>> sink =
> new OpensearchSinkBuilder<Tuple4<String, String, Long, Long>>()
> .setHosts(new HttpHost("localhost", 9200, "https"))
> .setEmitter( (element, ctx, indexer) -> {
> indexer.add(
> Requests
> .indexRequest()
> .index("users")
> .id(element.f0)
> .source(Map.ofEntries(
> Map.entry("user_id", element.f0),
> Map.entry("user_name", element.f1),
> Map.entry("uv", element.f2),
> Map.entry("pv", element.f3)
> )));
> })
> .setConnectionUsername("admin")
> .setConnectionPassword("admin")
> .setAllowInsecure(true)
> .setBulkFlushMaxActions(1)
> .build();
> source.sinkTo(sink);
> env.execute("Opensearch end to end sink test example");
> {noformat}
> the stream processing fails with the exception
> {noformat}
> Caused by: org.apache.flink.streaming.runtime.tasks.StreamTaskException: 
> Cannot instantiate user function.
>   at 
> org.apache.flink.streaming.api.graph.StreamConfig.getStreamOperatorFactory(StreamConfig.java:399)
>   at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain.<init>(OperatorChain.java:162)
>   at 
> org.apache.flink.streaming.runtime.tasks.RegularOperatorChain.<init>(RegularOperatorChain.java:60)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.restoreInternal(StreamTask.java:681)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.restore(StreamTask.java:669)
>   at 
> org.apache.flink.runtime.taskmanager.Task.runWithSystemExitMonitoring(Task.java:935)
>   at 
> org.apache.flink.runtime.taskmanager.Task.restoreAndInvoke(Task.java:904)
>   at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:728)
>   at org.apache.flink.runtime.taskmanager.Task.run(Task.java:550)
>   at java.base/java.lang.Thread.run(Thread.java:829)
> Caused by: java.lang.ClassCastException: cannot assign instance of 
> java.lang.invoke.SerializedLambda to field 
> org.apache.flink.connector.opensearch.sink.OpensearchSink.emitter of type 
> org.apache.flink.connector.opensearch.sink.OpensearchEmitter in instance of 
> org.apache.flink.connector.opensearch.sink.OpensearchSink
>   at 
> java.base/java.io.ObjectStreamClass$FieldReflector.setObjFieldValues(ObjectStreamClass.java:2076)
>   at 
> java.base/java.io.ObjectStreamClass$FieldReflector.checkObjectFieldValueTypes(ObjectStreamClass.java:2039)
>   at 
> java.base/java.io.ObjectStreamClass.checkObjFieldValueTypes(ObjectStreamClass.java:1293)
>   at 
> java.base/java.io.ObjectInputStream.defaultCheckFieldValues(ObjectInputStream.java:2512)
>   at 
> java.base/java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2419)
>   at 
> java.base/java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2228)
>   at 
> java.base/java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1687)
>   at 
> java.base/java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2496)
>   at 
> java.base/java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2390)
>   at 
> java.base/java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2228)
>   at 
> java.base/java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1687)
>   at 
> 

[jira] [Updated] (FLINK-30718) Cannot assign instance of java.lang.invoke.SerializedLambda to field org.apache.flink.connector.opensearch.sink.OpensearchSink.emitter

2023-01-17 Thread Andriy Redko (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-30718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andriy Redko updated FLINK-30718:
-
Description: 
When using OpenSearchSink (Apache Flink OpenSearch Connector 1.0.0) 
programmatically
{noformat}
 
final StreamExecutionEnvironment env = StreamExecutionEnvironment
.createRemoteEnvironment("localhost", 8081);

final Collection<Tuple4<String, String, Long, Long>> users = new 
ArrayList<>();
users.add(Tuple4.of("u1", "admin", 100L, 200L));

final DataStream<Tuple4<String, String, Long, Long>> source = 
env.fromCollection(users);
final OpensearchSink<Tuple4<String, String, Long, Long>> sink =
new OpensearchSinkBuilder<Tuple4<String, String, Long, Long>>()
.setHosts(new HttpHost("localhost", 9200, "https"))
.setEmitter( (element, ctx, indexer) -> {
indexer.add(
Requests
.indexRequest()
.index("users")
.id(element.f0)
.source(Map.ofEntries(
Map.entry("user_id", element.f0),
Map.entry("user_name", element.f1),
Map.entry("uv", element.f2),
Map.entry("pv", element.f3)
)));
})
.setConnectionUsername("admin")
.setConnectionPassword("admin")
.setAllowInsecure(true)
.setBulkFlushMaxActions(1)
.build();

source.sinkTo(sink);
env.execute("Opensearch end to end sink test example");

{noformat}
the stream processing fails with the exception
{noformat}
Caused by: org.apache.flink.streaming.runtime.tasks.StreamTaskException: Cannot 
instantiate user function.
at 
org.apache.flink.streaming.api.graph.StreamConfig.getStreamOperatorFactory(StreamConfig.java:399)
at 
org.apache.flink.streaming.runtime.tasks.OperatorChain.<init>(OperatorChain.java:162)
at 
org.apache.flink.streaming.runtime.tasks.RegularOperatorChain.<init>(RegularOperatorChain.java:60)
at 
org.apache.flink.streaming.runtime.tasks.StreamTask.restoreInternal(StreamTask.java:681)
at 
org.apache.flink.streaming.runtime.tasks.StreamTask.restore(StreamTask.java:669)
at 
org.apache.flink.runtime.taskmanager.Task.runWithSystemExitMonitoring(Task.java:935)
at 
org.apache.flink.runtime.taskmanager.Task.restoreAndInvoke(Task.java:904)
at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:728)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:550)
at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: java.lang.ClassCastException: cannot assign instance of 
java.lang.invoke.SerializedLambda to field 
org.apache.flink.connector.opensearch.sink.OpensearchSink.emitter of type 
org.apache.flink.connector.opensearch.sink.OpensearchEmitter in instance of 
org.apache.flink.connector.opensearch.sink.OpensearchSink
at 
java.base/java.io.ObjectStreamClass$FieldReflector.setObjFieldValues(ObjectStreamClass.java:2076)
at 
java.base/java.io.ObjectStreamClass$FieldReflector.checkObjectFieldValueTypes(ObjectStreamClass.java:2039)
at 
java.base/java.io.ObjectStreamClass.checkObjFieldValueTypes(ObjectStreamClass.java:1293)
at 
java.base/java.io.ObjectInputStream.defaultCheckFieldValues(ObjectInputStream.java:2512)
at 
java.base/java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2419)
at 
java.base/java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2228)
at 
java.base/java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1687)
at 
java.base/java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2496)
at 
java.base/java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2390)
at 
java.base/java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2228)
at 
java.base/java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1687)
at 
java.base/java.io.ObjectInputStream.readObject(ObjectInputStream.java:489)
at 
java.base/java.io.ObjectInputStream.readObject(ObjectInputStream.java:447)
at 
org.apache.flink.util.InstantiationUtil.deserializeObject(InstantiationUtil.java:617)
at 
org.apache.flink.util.InstantiationUtil.deserializeObject(InstantiationUtil.java:602)
at 
org.apache.flink.util.InstantiationUtil.deserializeObject(InstantiationUtil.java:589)
at 
org.apache.flink.util.InstantiationUtil.readObjectFromConfig(InstantiationUtil.java:543)
at 
org.apache.flink.streaming.api.graph.StreamConfig.getStreamOperatorFactory(StreamConfig.java:383)
... 9 more


 {noformat}
Reproducer project attached.
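The ClassCastException above is the classic symptom of Java lambda serialization: a serializable lambda is written to the wire as a java.lang.invoke.SerializedLambda, and deserialization only succeeds when the capturing class (here, the job code) is available to resolve it back into a lambda instance; otherwise the raw SerializedLambda can end up assigned to the target field. The following JDK-only sketch shows the round-trip mechanism; the Emitter interface is a hypothetical stand-in, not Flink's OpensearchEmitter:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class LambdaSerDemo {
    // Hypothetical stand-in for a sink's emitter interface; extending
    // Serializable is what makes a lambda targeting it serializable at all.
    interface Emitter extends Serializable {
        String emit(String element);
    }

    static String roundTrip(String element) throws Exception {
        Emitter emitter = e -> "emitted:" + e; // written as a SerializedLambda
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(emitter);
        }
        try (ObjectInputStream ois = new ObjectInputStream(
                new ByteArrayInputStream(bos.toByteArray()))) {
            // Resolving back to a lambda instance relies on the capturing
            // class (LambdaSerDemo) being on the deserializing classpath;
            // on a remote TaskManager that cannot resolve the job classes,
            // this is the step where deserialization goes wrong.
            Emitter back = (Emitter) ois.readObject();
            return back.emit(element);
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(roundTrip("x"));
    }
}
```

In-process the round-trip succeeds; the failure reported in this issue appears only when the deserializing JVM cannot resolve the lambda, which is why packaging the emitter as a named serializable class is a commonly suggested workaround (an assumption for illustration here, not a fix confirmed by this thread).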

  was:
When using OpenSearchSink programmatically
{noformat}
 
final 

[jira] [Updated] (FLINK-30718) Cannot assign instance of java.lang.invoke.SerializedLambda to field org.apache.flink.connector.opensearch.sink.OpensearchSink.emitter

2023-01-17 Thread Andriy Redko (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-30718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andriy Redko updated FLINK-30718:
-
Description: 
When using OpenSearchSink programmatically
{noformat}
 
final StreamExecutionEnvironment env = StreamExecutionEnvironment
.createRemoteEnvironment("localhost", 8081);

final Collection<Tuple4<String, String, Long, Long>> users = new ArrayList<>();
users.add(Tuple4.of("u1", "admin", 100L, 200L));

final DataStream<Tuple4<String, String, Long, Long>> source = env.fromCollection(users);
final OpensearchSink<Tuple4<String, String, Long, Long>> sink =
new OpensearchSinkBuilder<Tuple4<String, String, Long, Long>>()
.setHosts(new HttpHost("localhost", 9200, "https"))
.setEmitter( (element, ctx, indexer) -> {
indexer.add(
Requests
.indexRequest()
.index("users")
.id(element.f0)
.source(Map.ofEntries(
Map.entry("user_id", element.f0),
Map.entry("user_name", element.f1),
Map.entry("uv", element.f2),
Map.entry("pv", element.f3)
)));
})
.setConnectionUsername("admin")
.setConnectionPassword("admin")
.setAllowInsecure(true)
.setBulkFlushMaxActions(1)
.build();

source.sinkTo(sink);
env.execute("Opensearch end to end sink test example");

{noformat}
the stream processing fails with the exception
{noformat}
Caused by: org.apache.flink.streaming.runtime.tasks.StreamTaskException: Cannot 
instantiate user function.
at 
org.apache.flink.streaming.api.graph.StreamConfig.getStreamOperatorFactory(StreamConfig.java:399)
at 
org.apache.flink.streaming.runtime.tasks.OperatorChain.<init>(OperatorChain.java:162)
at 
org.apache.flink.streaming.runtime.tasks.RegularOperatorChain.<init>(RegularOperatorChain.java:60)
at 
org.apache.flink.streaming.runtime.tasks.StreamTask.restoreInternal(StreamTask.java:681)
at 
org.apache.flink.streaming.runtime.tasks.StreamTask.restore(StreamTask.java:669)
at 
org.apache.flink.runtime.taskmanager.Task.runWithSystemExitMonitoring(Task.java:935)
at 
org.apache.flink.runtime.taskmanager.Task.restoreAndInvoke(Task.java:904)
at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:728)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:550)
at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: java.lang.ClassCastException: cannot assign instance of 
java.lang.invoke.SerializedLambda to field 
org.apache.flink.connector.opensearch.sink.OpensearchSink.emitter of type 
org.apache.flink.connector.opensearch.sink.OpensearchEmitter in instance of 
org.apache.flink.connector.opensearch.sink.OpensearchSink
at 
java.base/java.io.ObjectStreamClass$FieldReflector.setObjFieldValues(ObjectStreamClass.java:2076)
at 
java.base/java.io.ObjectStreamClass$FieldReflector.checkObjectFieldValueTypes(ObjectStreamClass.java:2039)
at 
java.base/java.io.ObjectStreamClass.checkObjFieldValueTypes(ObjectStreamClass.java:1293)
at 
java.base/java.io.ObjectInputStream.defaultCheckFieldValues(ObjectInputStream.java:2512)
at 
java.base/java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2419)
at 
java.base/java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2228)
at 
java.base/java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1687)
at 
java.base/java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2496)
at 
java.base/java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2390)
at 
java.base/java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2228)
at 
java.base/java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1687)
at 
java.base/java.io.ObjectInputStream.readObject(ObjectInputStream.java:489)
at 
java.base/java.io.ObjectInputStream.readObject(ObjectInputStream.java:447)
at 
org.apache.flink.util.InstantiationUtil.deserializeObject(InstantiationUtil.java:617)
at 
org.apache.flink.util.InstantiationUtil.deserializeObject(InstantiationUtil.java:602)
at 
org.apache.flink.util.InstantiationUtil.deserializeObject(InstantiationUtil.java:589)
at 
org.apache.flink.util.InstantiationUtil.readObjectFromConfig(InstantiationUtil.java:543)
at 
org.apache.flink.streaming.api.graph.StreamConfig.getStreamOperatorFactory(StreamConfig.java:383)
... 9 more


 {noformat}

  was:
When using OpenSearchSink programmatically
{noformat}
 
final StreamExecutionEnvironment env = StreamExecutionEnvironment

[jira] [Created] (FLINK-30718) Cannot assign instance of java.lang.invoke.SerializedLambda to field org.apache.flink.connector.opensearch.sink.OpensearchSink.emitter

2023-01-17 Thread Andriy Redko (Jira)
Andriy Redko created FLINK-30718:


 Summary: Cannot assign instance of 
java.lang.invoke.SerializedLambda to field 
org.apache.flink.connector.opensearch.sink.OpensearchSink.emitter 
 Key: FLINK-30718
 URL: https://issues.apache.org/jira/browse/FLINK-30718
 Project: Flink
  Issue Type: Bug
  Components: Connectors / Opensearch
Affects Versions: 1.16.0
Reporter: Andriy Redko


When using OpenSearchSink programmatically
{noformat}
 
final StreamExecutionEnvironment env = StreamExecutionEnvironment
.createRemoteEnvironment("localhost", 8081);

final Collection<Tuple4<String, String, Long, Long>> users = new ArrayList<>();
users.add(Tuple4.of("u1", "admin", 100L, 200L));

final DataStream<Tuple4<String, String, Long, Long>> source = env.fromCollection(users);
final OpensearchSink<Tuple4<String, String, Long, Long>> sink =
new OpensearchSinkBuilder<Tuple4<String, String, Long, Long>>()
.setHosts(new HttpHost("localhost", 9200, "https"))
.setEmitter( (element, ctx, indexer) -> {
indexer.add(
Requests
.indexRequest()
.index("users")
.id(element.f0)
.source(Map.ofEntries(
Map.entry("user_id", element.f0),
Map.entry("user_name", element.f1),
Map.entry("uv", element.f2),
Map.entry("pv", element.f3)
)));
})
.setConnectionUsername("admin")
.setConnectionPassword("admin")
.setAllowInsecure(true)
.setBulkFlushMaxActions(1)
.build();

source.sinkTo(sink);
env.execute("Opensearch end to end sink test example");

{noformat}
the stream processing fails with the exception
{noformat}
Caused by: org.apache.flink.streaming.runtime.tasks.StreamTaskException: Cannot 
instantiate user function.
at 
org.apache.flink.streaming.api.graph.StreamConfig.getStreamOperatorFactory(StreamConfig.java:399)
at 
org.apache.flink.streaming.runtime.tasks.OperatorChain.<init>(OperatorChain.java:162)
at 
org.apache.flink.streaming.runtime.tasks.RegularOperatorChain.<init>(RegularOperatorChain.java:60)
at 
org.apache.flink.streaming.runtime.tasks.StreamTask.restoreInternal(StreamTask.java:681)
at 
org.apache.flink.streaming.runtime.tasks.StreamTask.restore(StreamTask.java:669)
at 
org.apache.flink.runtime.taskmanager.Task.runWithSystemExitMonitoring(Task.java:935)
at 
org.apache.flink.runtime.taskmanager.Task.restoreAndInvoke(Task.java:904)
at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:728)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:550)
at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: java.lang.ClassCastException: cannot assign instance of 
java.lang.invoke.SerializedLambda to field 
org.apache.flink.connector.opensearch.sink.OpensearchSink.emitter of type 
org.apache.flink.connector.opensearch.sink.OpensearchEmitter in instance of 
org.apache.flink.connector.opensearch.sink.OpensearchSink
at 
java.base/java.io.ObjectStreamClass$FieldReflector.setObjFieldValues(ObjectStreamClass.java:2076)
at 
java.base/java.io.ObjectStreamClass$FieldReflector.checkObjectFieldValueTypes(ObjectStreamClass.java:2039)
at 
java.base/java.io.ObjectStreamClass.checkObjFieldValueTypes(ObjectStreamClass.java:1293)
at 
java.base/java.io.ObjectInputStream.defaultCheckFieldValues(ObjectInputStream.java:2512)
at 
java.base/java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2419)
at 
java.base/java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2228)
at 
java.base/java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1687)
at 
java.base/java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2496)
at 
java.base/java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2390)
at 
java.base/java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2228)
at 
java.base/java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1687)
at 
java.base/java.io.ObjectInputStream.readObject(ObjectInputStream.java:489)
at 
java.base/java.io.ObjectInputStream.readObject(ObjectInputStream.java:447)
at 
org.apache.flink.util.InstantiationUtil.deserializeObject(InstantiationUtil.java:617)
at 
org.apache.flink.util.InstantiationUtil.deserializeObject(InstantiationUtil.java:602)
at 
org.apache.flink.util.InstantiationUtil.deserializeObject(InstantiationUtil.java:589)
at 
org.apache.flink.util.InstantiationUtil.readObjectFromConfig(InstantiationUtil.java:543)

[jira] [Commented] (FLINK-30537) Add support for OpenSearch 2.3

2023-01-01 Thread Andriy Redko (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-30537?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17653450#comment-17653450
 ] 

Andriy Redko commented on FLINK-30537:
--

The current Opensearch connector supports both 1.3.x and 2.x (with 1.3.x being 
the default client); the 2.x client requires a JDK 11 baseline.

> Add support for OpenSearch 2.3
> --
>
> Key: FLINK-30537
> URL: https://issues.apache.org/jira/browse/FLINK-30537
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Opensearch
>Reporter: Martijn Visser
>Priority: Major
>
> Create a version for Flink’s Opensearch connector that supports version 2.3.
> From the ASF Flink Slack: 
> https://apache-flink.slack.com/archives/C03GV7L3G2C/p1672339157102319



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-30526) Handle failures in OpenSearch with ActionRequestFailureHandler being deprecated

2022-12-28 Thread Andriy Redko (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-30526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17652525#comment-17652525
 ] 

Andriy Redko commented on FLINK-30526:
--

The deprecations are related to 
https://issues.apache.org/jira/browse/FLINK-24323 and basically hint towards 
using 
[https://cwiki.apache.org/confluence/display/FLINK/FLIP-143%3A+Unified+Sink+API]
 instead, hope it helps.

> Handle failures in OpenSearch with ActionRequestFailureHandler being 
> deprecated
> ---
>
> Key: FLINK-30526
> URL: https://issues.apache.org/jira/browse/FLINK-30526
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Opensearch
>Reporter: Martijn Visser
>Priority: Major
>
> {quote} Hi everyone,
> I have a streaming application that has Elasticsearch sink.
> I Upgraded flink version from 1.11 to 1.16 and also moved from ES 7 to 
> OpenSearch 2.0, and now I'm facing some deprected issues, hope you can help 
> me.
> In the previous version I created ElasticsearchSink and added a failure 
> handler, which protected the sink to not fail on some exceptions.
>  final ActionRequestFailureHandler failureHandler = (action, failure, 
> restStatusCode, indexer) -> {
> if (ExceptionUtils.findThrowable(failure, 
> EsRejectedExecutionException.class).isPresent()) {
> indexer.add(action);
> } else if (ExceptionUtils.findThrowable(failure, 
> ElasticsearchParseException.class).isPresent()) {
> log.warn("Got malformed document , action {}", action);
> // malformed document; simply drop elasticsearchSinkFunction 
> without failing sink
> } else if (failure instanceof IOException && failure.getCause() 
> instanceof NullPointerException && failure.getMessage().contains("Unable to 
> parse response body")) {
> //issue with ES 7 and opensearch - that does not send type - 
> while response is waiting for it
> //at 
> org.elasticsearch.action.DocWriteResponse.<init>(DocWriteResponse.java:127) 
> -- this.type = Objects.requireNonNull(type);
> log.debug("known issue format the response for ES 7.5.1 and 
> DB OS (opensearch) :{}", failure.getMessage());
> } else {
> // for all other failures, log and don't fail the sink
> log.error("Got error while trying to perform ES action {}", 
> action, failure);
> }
> };
>   
>  final ElasticsearchSink.Builder builder = new 
> ElasticsearchSink.Builder<>(transportNodes, elasticsearchSinkFunction);
> In the new version the class ActionRequestFailureHandler is deprecated and 
> after investigation I can't find any way to handle failures.
> For all failures the sink fails.
> Is there anything I didn't see?
> Thanks in advance! 
> {quote}
> From the Apache Flink Slack channel 
> https://apache-flink.slack.com/archives/C03G7LJTS2G/p1672122873318899
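The dispatch logic of the handler quoted above can be sketched with plain JDK types. This is an illustrative stand-in, not Flink or OpenSearch API: the findThrowable helper mirrors Flink's ExceptionUtils.findThrowable (walking the cause chain), and RejectedExecutionException stands in for EsRejectedExecutionException:

```java
import java.io.IOException;
import java.util.Optional;
import java.util.concurrent.RejectedExecutionException;

public class FailureClassifier {
    enum Action { RETRY, DROP, FAIL }

    // JDK-only stand-in for Flink's ExceptionUtils.findThrowable:
    // walk the cause chain looking for an instance of the given type.
    static <T extends Throwable> Optional<T> findThrowable(Throwable t, Class<T> type) {
        while (t != null) {
            if (type.isInstance(t)) {
                return Optional.of(type.cast(t));
            }
            t = t.getCause();
        }
        return Optional.empty();
    }

    // Hypothetical classification mirroring the quoted handler: rejected
    // executions are re-queued, known parse errors dropped, the rest fail.
    static Action classify(Throwable failure) {
        if (findThrowable(failure, RejectedExecutionException.class).isPresent()) {
            return Action.RETRY;
        }
        if (failure instanceof IOException
                && failure.getMessage() != null
                && failure.getMessage().contains("Unable to parse response body")) {
            return Action.DROP;
        }
        return Action.FAIL;
    }

    public static void main(String[] args) {
        System.out.println(classify(new IOException("Unable to parse response body")));
    }
}
```

With the deprecated ActionRequestFailureHandler gone, this kind of decision table has no direct hook in the unified Sink API; the comment above points to FLIP-143 as the intended replacement.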



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-30488) [Connectors] [Opensearch] OpenSearch implementation of Async Sink

2022-12-22 Thread Andriy Redko (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-30488?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andriy Redko updated FLINK-30488:
-
Summary: [Connectors] [Opensearch] OpenSearch implementation of Async Sink  
(was: OpenSearch implementation of Async Sink)

> [Connectors] [Opensearch] OpenSearch implementation of Async Sink
> -
>
> Key: FLINK-30488
> URL: https://issues.apache.org/jira/browse/FLINK-30488
> Project: Flink
>  Issue Type: New Feature
>  Components: Connectors / Opensearch
>Affects Versions: opensearch-1.0.0
>Reporter: Andriy Redko
>Priority: Major
>
> The current OpenSearch connector only uses the SinkFunction. 
>  * Implement an asynchronous sink (Async Sink) support for OpenSearch 
> connector
>  * Update documentation
>  * Add end to end tests
> More details to be found 
> [https://cwiki.apache.org/confluence/display/FLINK/FLIP-171%3A+Async+Sink]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-30488) OpenSearch implementation of Async Sink

2022-12-22 Thread Andriy Redko (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-30488?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andriy Redko updated FLINK-30488:
-
Summary: OpenSearch implementation of Async Sink  (was: [Connectors] 
[Opensearch] OpenSearch implementation of Async Sink)

> OpenSearch implementation of Async Sink
> ---
>
> Key: FLINK-30488
> URL: https://issues.apache.org/jira/browse/FLINK-30488
> Project: Flink
>  Issue Type: New Feature
>  Components: Connectors / Opensearch
>Affects Versions: opensearch-1.0.0
>Reporter: Andriy Redko
>Priority: Major
>
> The current OpenSearch connector only uses the SinkFunction. 
>  * Implement an asynchronous sink (Async Sink) support for OpenSearch 
> connector
>  * Update documentation
>  * Add end to end tests
> More details to be found 
> [https://cwiki.apache.org/confluence/display/FLINK/FLIP-171%3A+Async+Sink]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-30488) OpenSearch implementation of Async Sink

2022-12-22 Thread Andriy Redko (Jira)
Andriy Redko created FLINK-30488:


 Summary: OpenSearch implementation of Async Sink
 Key: FLINK-30488
 URL: https://issues.apache.org/jira/browse/FLINK-30488
 Project: Flink
  Issue Type: New Feature
  Components: Connectors / Opensearch
Affects Versions: opensearch-1.0.0
Reporter: Andriy Redko


* Implement an asynchronous sink (Async Sink) support for OpenSearch connector
 * Update documentation
 * Add end to end tests

h2. References

More details to be found 
[https://cwiki.apache.org/confluence/display/FLINK/FLIP-171%3A+Async+Sink]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-30488) OpenSearch implementation of Async Sink

2022-12-22 Thread Andriy Redko (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-30488?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andriy Redko updated FLINK-30488:
-
Description: 
The current OpenSearch connector only uses the SinkFunction. 
 * Implement an asynchronous sink (Async Sink) support for OpenSearch connector
 * Update documentation
 * Add end to end tests

More details to be found 
[https://cwiki.apache.org/confluence/display/FLINK/FLIP-171%3A+Async+Sink]

  was:
* Implement an asynchronous sink (Async Sink) support for OpenSearch connector
 * Update documentation
 * Add end to end tests

More details to be found 
[https://cwiki.apache.org/confluence/display/FLINK/FLIP-171%3A+Async+Sink]


> OpenSearch implementation of Async Sink
> ---
>
> Key: FLINK-30488
> URL: https://issues.apache.org/jira/browse/FLINK-30488
> Project: Flink
>  Issue Type: New Feature
>  Components: Connectors / Opensearch
>Affects Versions: opensearch-1.0.0
>Reporter: Andriy Redko
>Priority: Major
>
> The current OpenSearch connector only uses the SinkFunction. 
>  * Implement an asynchronous sink (Async Sink) support for OpenSearch 
> connector
>  * Update documentation
>  * Add end to end tests
> More details to be found 
> [https://cwiki.apache.org/confluence/display/FLINK/FLIP-171%3A+Async+Sink]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-30488) OpenSearch implementation of Async Sink

2022-12-22 Thread Andriy Redko (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-30488?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andriy Redko updated FLINK-30488:
-
Description: 
* Implement an asynchronous sink (Async Sink) support for OpenSearch connector
 * Update documentation
 * Add end to end tests

More details to be found 
[https://cwiki.apache.org/confluence/display/FLINK/FLIP-171%3A+Async+Sink]

  was:
* Implement an asynchronous sink (Async Sink) support for OpenSearch connector
 * Update documentation
 * Add end to end tests

h2. References

More details to be found 
[https://cwiki.apache.org/confluence/display/FLINK/FLIP-171%3A+Async+Sink]


> OpenSearch implementation of Async Sink
> ---
>
> Key: FLINK-30488
> URL: https://issues.apache.org/jira/browse/FLINK-30488
> Project: Flink
>  Issue Type: New Feature
>  Components: Connectors / Opensearch
>Affects Versions: opensearch-1.0.0
>Reporter: Andriy Redko
>Priority: Major
>
> * Implement an asynchronous sink (Async Sink) support for OpenSearch connector
>  * Update documentation
>  * Add end to end tests
> More details to be found 
> [https://cwiki.apache.org/confluence/display/FLINK/FLIP-171%3A+Async+Sink]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-30352) [Connectors][Elasticsearch] Document missing configuration properties

2022-12-09 Thread Andriy Redko (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-30352?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andriy Redko updated FLINK-30352:
-
Description: 
There are a number of configuration properties which are not documented:
 - sink.delivery-guarantee 
 - connection.request-timeout
 - connection.timeout
 - socket.timeout
 

  was:
There are a number of configuration properties which are not documented:

- sink.delivery-guarantee
- connection.request-timeout
- connection.timeout
- socket.timeout
 


> [Connectors][Elasticsearch] Document missing configuration properties
> -
>
> Key: FLINK-30352
> URL: https://issues.apache.org/jira/browse/FLINK-30352
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / ElasticSearch
>Affects Versions: 1.16.0, 1.15.3
>Reporter: Andriy Redko
>Priority: Major
>
> There are a number of configuration properties which are not documented:
>  - sink.delivery-guarantee 
>  - connection.request-timeout
>  - connection.timeout
>  - socket.timeout
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-30352) [Connectors][Elasticsearch] Document missing configuration properties

2022-12-09 Thread Andriy Redko (Jira)
Andriy Redko created FLINK-30352:


 Summary: [Connectors][Elasticsearch] Document missing 
configuration properties
 Key: FLINK-30352
 URL: https://issues.apache.org/jira/browse/FLINK-30352
 Project: Flink
  Issue Type: Bug
  Components: Connectors / ElasticSearch
Affects Versions: 1.15.3, 1.16.0
Reporter: Andriy Redko


There are a number of configuration properties which are not documented:

- sink.delivery-guarantee
- connection.request-timeout
- connection.timeout
- socket.timeout
 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-30352) [Connectors][Elasticsearch] Document missing configuration properties

2022-12-09 Thread Andriy Redko (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-30352?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andriy Redko updated FLINK-30352:
-
Description: 
There are a number of configuration properties which are not documented:
 - sink.delivery-guarantee
 - connection.request-timeout
 - connection.timeout
 - socket.timeout
 

  was:
There are a number of configuration properties which are not documented:
 - sink.delivery-guarantee 
 - connection.request-timeout
 - connection.timeout
 - socket.timeout
 


> [Connectors][Elasticsearch] Document missing configuration properties
> -
>
> Key: FLINK-30352
> URL: https://issues.apache.org/jira/browse/FLINK-30352
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / ElasticSearch
>Affects Versions: 1.16.0, 1.15.3
>Reporter: Andriy Redko
>Priority: Major
>
> There are a number of configuration properties which are not documented:
>  - sink.delivery-guarantee
>  - connection.request-timeout
>  - connection.timeout
>  - socket.timeout
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-30352) [Connectors][Elasticsearch] Document missing configuration properties

2022-12-09 Thread Andriy Redko (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-30352?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andriy Redko updated FLINK-30352:
-
Priority: Minor  (was: Major)

> [Connectors][Elasticsearch] Document missing configuration properties
> -
>
> Key: FLINK-30352
> URL: https://issues.apache.org/jira/browse/FLINK-30352
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / ElasticSearch
>Affects Versions: 1.16.0, 1.15.3
>Reporter: Andriy Redko
>Priority: Minor
>
> There are a number of configuration properties which are not documented:
>  - sink.delivery-guarantee
>  - connection.request-timeout
>  - connection.timeout
>  - socket.timeout
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-25756) Dedicated Opensearch connectors

2022-11-14 Thread Andriy Redko (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17634015#comment-17634015
 ] 

Andriy Redko commented on FLINK-25756:
--

Retargeting to https://github.com/apache/flink-connector-opensearch/pull/1

> Dedicated Opensearch connectors
> ---
>
> Key: FLINK-25756
> URL: https://issues.apache.org/jira/browse/FLINK-25756
> Project: Flink
>  Issue Type: New Feature
>  Components: Connectors / Opensearch
>Reporter: Andriy Redko
>Priority: Minor
>  Labels: auto-deprioritized-major, pull-request-available
> Fix For: opensearch-1.0.0
>
>
> Since OpenSearch was forked from Elasticsearch, a few things have changed. The 
> projects evolve in different directions: Elasticsearch clients up to 7.13.x were 
> able to connect to OpenSearch clusters, but since 7.14 they no longer can [1] 
> (Elastic continues to harden their clients to connect to Elasticsearch clusters 
> only).
> For example, running current Flink master against OpenSearch clusters using 
> Elasticsearch 7 connectors fails with:
>  
> {noformat}
>  Caused by: ElasticsearchException[Elasticsearch version 6 or more is 
> required]
>  at 
> org.apache.flink.elasticsearch7.shaded.org.elasticsearch.client.RestHighLevelClient$4.onResponse(RestHighLevelClient.java:2056)
>  at 
> org.apache.flink.elasticsearch7.shaded.org.elasticsearch.client.RestHighLevelClient$4.onResponse(RestHighLevelClient.java:2043)
>  at 
> org.apache.flink.elasticsearch7.shaded.org.elasticsearch.common.util.concurrent.ListenableFuture.notifyListenerDirectly(ListenableFuture.java:113)
>  at 
> org.apache.flink.elasticsearch7.shaded.org.elasticsearch.common.util.concurrent.ListenableFuture.done(ListenableFuture.java:100)
>  at 
> org.apache.flink.elasticsearch7.shaded.org.elasticsearch.common.util.concurrent.BaseFuture.set(BaseFuture.java:133)
>  at 
> org.apache.flink.elasticsearch7.shaded.org.elasticsearch.common.util.concurrent.ListenableFuture.onResponse(ListenableFuture.java:139)
>  at 
> org.apache.flink.elasticsearch7.shaded.org.elasticsearch.client.RestHighLevelClient$5.onSuccess(RestHighLevelClient.java:2129)
>  at 
> org.apache.flink.elasticsearch7.shaded.org.elasticsearch.client.RestClient$FailureTrackingResponseListener.onSuccess(RestClient.java:636)
>  at 
> org.apache.flink.elasticsearch7.shaded.org.elasticsearch.client.RestClient$1.completed(RestClient.java:376)
>  at 
> org.apache.flink.elasticsearch7.shaded.org.elasticsearch.client.RestClient$1.completed(RestClient.java:370)
>  at 
> org.apache.flink.elasticsearch7.shaded.org.apache.http.concurrent.BasicFuture.completed(BasicFuture.java:122)
>  at 
> org.apache.flink.elasticsearch7.shaded.org.apache.http.impl.nio.client.DefaultClientExchangeHandlerImpl.responseCompleted(DefaultClientExchangeHandlerImpl.java:181)
>  at 
> org.apache.flink.elasticsearch7.shaded.org.apache.http.nio.protocol.HttpAsyncRequestExecutor.processResponse(HttpAsyncRequestExecutor.java:448)
>  at 
> org.apache.flink.elasticsearch7.shaded.org.apache.http.nio.protocol.HttpAsyncRequestExecutor.inputReady(HttpAsyncRequestExecutor.java:338)
>  at 
> org.apache.flink.elasticsearch7.shaded.org.apache.http.impl.nio.DefaultNHttpClientConnection.consumeInput(DefaultNHttpClientConnection.java:265)
>  at 
> org.apache.flink.elasticsearch7.shaded.org.apache.http.impl.nio.client.InternalIODispatch.onInputReady(InternalIODispatch.java:81)
>  at 
> org.apache.flink.elasticsearch7.shaded.org.apache.http.impl.nio.client.InternalIODispatch.onInputReady(InternalIODispatch.java:39)
>  at 
> org.apache.flink.elasticsearch7.shaded.org.apache.http.impl.nio.reactor.AbstractIODispatch.inputReady(AbstractIODispatch.java:114)
>  at 
> org.apache.flink.elasticsearch7.shaded.org.apache.http.impl.nio.reactor.BaseIOReactor.readable(BaseIOReactor.java:162)
>  at 
> org.apache.flink.elasticsearch7.shaded.org.apache.http.impl.nio.reactor.AbstractIOReactor.processEvent(AbstractIOReactor.java:337)
>  at 
> org.apache.flink.elasticsearch7.shaded.org.apache.http.impl.nio.reactor.AbstractIOReactor.processEvents(AbstractIOReactor.java:315)
>  at 
> org.apache.flink.elasticsearch7.shaded.org.apache.http.impl.nio.reactor.AbstractIOReactor.execute(AbstractIOReactor.java:276)
>  at 
> org.apache.flink.elasticsearch7.shaded.org.apache.http.impl.nio.reactor.BaseIOReactor.execute(BaseIOReactor.java:104)
>  at 
> org.apache.flink.elasticsearch7.shaded.org.apache.http.impl.nio.reactor.AbstractMultiworkerIOReactor$Worker.run(AbstractMultiworkerIOReactor.java:591){noformat}
>  
> With the compatibility mode [2] turned on, still fails further the line:
> {noformat}
> Caused by: ElasticsearchException[Invalid or missing tagline [The OpenSearch 
> Project: https://opensearch.org/]]

[jira] [Created] (FLINK-25961) Remove transport client from Elasticsearch 6/7 connectors (tests only)

2022-02-04 Thread Andriy Redko (Jira)
Andriy Redko created FLINK-25961:


 Summary: Remove transport client from Elasticsearch 6/7 connectors 
(tests only)
 Key: FLINK-25961
 URL: https://issues.apache.org/jira/browse/FLINK-25961
 Project: Flink
  Issue Type: Improvement
  Components: Connectors / ElasticSearch
Reporter: Andriy Redko


The Elasticsearch 6/7 connectors still use the deprecated transport client for 
integration tests. This is not really necessary and brings in a lot of 
dependencies; the HighLevelRestClient is fully sufficient and can be used as a 
drop-in replacement.

The change affects only tests.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (FLINK-25756) Dedicated Opensearch connectors

2022-01-27 Thread Andriy Redko (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17483220#comment-17483220
 ] 

Andriy Redko commented on FLINK-25756:
--

[~MartijnVisser] the draft pull request is out, 
[https://github.com/apache/flink/pull/18541], would be terrific to get your 
early feedback on the decisions taken, thanks a mill for offering your help.

 

PS: it is against master but, as you mentioned, if needed will be retargeted to 
dedicated connectors repository

> Dedicated Opensearch connectors
> ---
>
> Key: FLINK-25756
> URL: https://issues.apache.org/jira/browse/FLINK-25756
> Project: Flink
>  Issue Type: New Feature
>  Components: Connectors / Common
>Reporter: Andriy Redko
>Priority: Major
>  Labels: pull-request-available
>
> Since the time Opensearch got forked from Elasticsearch a few things got 
> changed. The projects evolve in different directions, the Elasticsearch 
> clients up to 7.13.x were able to connect to Opensearch clusters, but since 
> 7.14 - not anymore [1] (Elastic continues to harden their clients to connect 
> to Elasticsearch clusters only).
> For example, running current Flink master against Opensearch clusters using 
> Elasticsearch 7 connectors would fail with:
>  
> {noformat}
>  Caused by: ElasticsearchException[Elasticsearch version 6 or more is 
> required]
>  at 
> org.apache.flink.elasticsearch7.shaded.org.elasticsearch.client.RestHighLevelClient$4.onResponse(RestHighLevelClient.java:2056)
>  at 
> org.apache.flink.elasticsearch7.shaded.org.elasticsearch.client.RestHighLevelClient$4.onResponse(RestHighLevelClient.java:2043)
>  at 
> org.apache.flink.elasticsearch7.shaded.org.elasticsearch.common.util.concurrent.ListenableFuture.notifyListenerDirectly(ListenableFuture.java:113)
>  at 
> org.apache.flink.elasticsearch7.shaded.org.elasticsearch.common.util.concurrent.ListenableFuture.done(ListenableFuture.java:100)
>  at 
> org.apache.flink.elasticsearch7.shaded.org.elasticsearch.common.util.concurrent.BaseFuture.set(BaseFuture.java:133)
>  at 
> org.apache.flink.elasticsearch7.shaded.org.elasticsearch.common.util.concurrent.ListenableFuture.onResponse(ListenableFuture.java:139)
>  at 
> org.apache.flink.elasticsearch7.shaded.org.elasticsearch.client.RestHighLevelClient$5.onSuccess(RestHighLevelClient.java:2129)
>  at 
> org.apache.flink.elasticsearch7.shaded.org.elasticsearch.client.RestClient$FailureTrackingResponseListener.onSuccess(RestClient.java:636)
>  at 
> org.apache.flink.elasticsearch7.shaded.org.elasticsearch.client.RestClient$1.completed(RestClient.java:376)
>  at 
> org.apache.flink.elasticsearch7.shaded.org.elasticsearch.client.RestClient$1.completed(RestClient.java:370)
>  at 
> org.apache.flink.elasticsearch7.shaded.org.apache.http.concurrent.BasicFuture.completed(BasicFuture.java:122)
>  at 
> org.apache.flink.elasticsearch7.shaded.org.apache.http.impl.nio.client.DefaultClientExchangeHandlerImpl.responseCompleted(DefaultClientExchangeHandlerImpl.java:181)
>  at 
> org.apache.flink.elasticsearch7.shaded.org.apache.http.nio.protocol.HttpAsyncRequestExecutor.processResponse(HttpAsyncRequestExecutor.java:448)
>  at 
> org.apache.flink.elasticsearch7.shaded.org.apache.http.nio.protocol.HttpAsyncRequestExecutor.inputReady(HttpAsyncRequestExecutor.java:338)
>  at 
> org.apache.flink.elasticsearch7.shaded.org.apache.http.impl.nio.DefaultNHttpClientConnection.consumeInput(DefaultNHttpClientConnection.java:265)
>  at 
> org.apache.flink.elasticsearch7.shaded.org.apache.http.impl.nio.client.InternalIODispatch.onInputReady(InternalIODispatch.java:81)
>  at 
> org.apache.flink.elasticsearch7.shaded.org.apache.http.impl.nio.client.InternalIODispatch.onInputReady(InternalIODispatch.java:39)
>  at 
> org.apache.flink.elasticsearch7.shaded.org.apache.http.impl.nio.reactor.AbstractIODispatch.inputReady(AbstractIODispatch.java:114)
>  at 
> org.apache.flink.elasticsearch7.shaded.org.apache.http.impl.nio.reactor.BaseIOReactor.readable(BaseIOReactor.java:162)
>  at 
> org.apache.flink.elasticsearch7.shaded.org.apache.http.impl.nio.reactor.AbstractIOReactor.processEvent(AbstractIOReactor.java:337)
>  at 
> org.apache.flink.elasticsearch7.shaded.org.apache.http.impl.nio.reactor.AbstractIOReactor.processEvents(AbstractIOReactor.java:315)
>  at 
> org.apache.flink.elasticsearch7.shaded.org.apache.http.impl.nio.reactor.AbstractIOReactor.execute(AbstractIOReactor.java:276)
>  at 
> org.apache.flink.elasticsearch7.shaded.org.apache.http.impl.nio.reactor.BaseIOReactor.execute(BaseIOReactor.java:104)
>  at 
> org.apache.flink.elasticsearch7.shaded.org.apache.http.impl.nio.reactor.AbstractMultiworkerIOReactor$Worker.run(AbstractMultiworkerIOReactor.java:591){noformat}
>  
> With the compatibility mode [2] turned on, still fails further the line:
> {noformat}
> Caused by: 

[jira] [Commented] (FLINK-25756) Dedicated Opensearch connectors

2022-01-21 Thread Andriy Redko (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17480243#comment-17480243
 ] 

Andriy Redko commented on FLINK-25756:
--

[~MartijnVisser] awesome, we are working on the pull request right now 
(targeting master, but looking forward to retargeting to the new connectors 
home). Early next week we should have something tangible to set up and test. 
Thank you very much for offering to help; much appreciated.

> Dedicated Opensearch connectors
> ---
>
> Key: FLINK-25756
> URL: https://issues.apache.org/jira/browse/FLINK-25756
> Project: Flink
>  Issue Type: Improvement
>Reporter: Andriy Redko
>Priority: Major
>
> Since the time Opensearch got forked from Elasticsearch a few things got 
> changed. The projects evolve in different directions, the Elasticsearch 
> clients up to 7.13.x were able to connect to Opensearch clusters, but since 
> 7.14 - not anymore [1] (Elastic continues to harden their clients to connect 
> to Elasticsearch clusters only).
> For example, running current Flink master against Opensearch clusters using 
> Elasticsearch 7 connectors would fail with:
>  
> {noformat}
>  Caused by: ElasticsearchException[Elasticsearch version 6 or more is 
> required]
>  at 
> org.apache.flink.elasticsearch7.shaded.org.elasticsearch.client.RestHighLevelClient$4.onResponse(RestHighLevelClient.java:2056)
>  at 
> org.apache.flink.elasticsearch7.shaded.org.elasticsearch.client.RestHighLevelClient$4.onResponse(RestHighLevelClient.java:2043)
>  at 
> org.apache.flink.elasticsearch7.shaded.org.elasticsearch.common.util.concurrent.ListenableFuture.notifyListenerDirectly(ListenableFuture.java:113)
>  at 
> org.apache.flink.elasticsearch7.shaded.org.elasticsearch.common.util.concurrent.ListenableFuture.done(ListenableFuture.java:100)
>  at 
> org.apache.flink.elasticsearch7.shaded.org.elasticsearch.common.util.concurrent.BaseFuture.set(BaseFuture.java:133)
>  at 
> org.apache.flink.elasticsearch7.shaded.org.elasticsearch.common.util.concurrent.ListenableFuture.onResponse(ListenableFuture.java:139)
>  at 
> org.apache.flink.elasticsearch7.shaded.org.elasticsearch.client.RestHighLevelClient$5.onSuccess(RestHighLevelClient.java:2129)
>  at 
> org.apache.flink.elasticsearch7.shaded.org.elasticsearch.client.RestClient$FailureTrackingResponseListener.onSuccess(RestClient.java:636)
>  at 
> org.apache.flink.elasticsearch7.shaded.org.elasticsearch.client.RestClient$1.completed(RestClient.java:376)
>  at 
> org.apache.flink.elasticsearch7.shaded.org.elasticsearch.client.RestClient$1.completed(RestClient.java:370)
>  at 
> org.apache.flink.elasticsearch7.shaded.org.apache.http.concurrent.BasicFuture.completed(BasicFuture.java:122)
>  at 
> org.apache.flink.elasticsearch7.shaded.org.apache.http.impl.nio.client.DefaultClientExchangeHandlerImpl.responseCompleted(DefaultClientExchangeHandlerImpl.java:181)
>  at 
> org.apache.flink.elasticsearch7.shaded.org.apache.http.nio.protocol.HttpAsyncRequestExecutor.processResponse(HttpAsyncRequestExecutor.java:448)
>  at 
> org.apache.flink.elasticsearch7.shaded.org.apache.http.nio.protocol.HttpAsyncRequestExecutor.inputReady(HttpAsyncRequestExecutor.java:338)
>  at 
> org.apache.flink.elasticsearch7.shaded.org.apache.http.impl.nio.DefaultNHttpClientConnection.consumeInput(DefaultNHttpClientConnection.java:265)
>  at 
> org.apache.flink.elasticsearch7.shaded.org.apache.http.impl.nio.client.InternalIODispatch.onInputReady(InternalIODispatch.java:81)
>  at 
> org.apache.flink.elasticsearch7.shaded.org.apache.http.impl.nio.client.InternalIODispatch.onInputReady(InternalIODispatch.java:39)
>  at 
> org.apache.flink.elasticsearch7.shaded.org.apache.http.impl.nio.reactor.AbstractIODispatch.inputReady(AbstractIODispatch.java:114)
>  at 
> org.apache.flink.elasticsearch7.shaded.org.apache.http.impl.nio.reactor.BaseIOReactor.readable(BaseIOReactor.java:162)
>  at 
> org.apache.flink.elasticsearch7.shaded.org.apache.http.impl.nio.reactor.AbstractIOReactor.processEvent(AbstractIOReactor.java:337)
>  at 
> org.apache.flink.elasticsearch7.shaded.org.apache.http.impl.nio.reactor.AbstractIOReactor.processEvents(AbstractIOReactor.java:315)
>  at 
> org.apache.flink.elasticsearch7.shaded.org.apache.http.impl.nio.reactor.AbstractIOReactor.execute(AbstractIOReactor.java:276)
>  at 
> org.apache.flink.elasticsearch7.shaded.org.apache.http.impl.nio.reactor.BaseIOReactor.execute(BaseIOReactor.java:104)
>  at 
> org.apache.flink.elasticsearch7.shaded.org.apache.http.impl.nio.reactor.AbstractMultiworkerIOReactor$Worker.run(AbstractMultiworkerIOReactor.java:591){noformat}
>  
> With the compatibility mode [2] turned on, still fails further the line:
> {noformat}
> Caused by: ElasticsearchException[Invalid or missing tagline [The OpenSearch 
> Project: https://opensearch.org/]]
>  at 
> 

[jira] [Updated] (FLINK-25756) Dedicated Opensearch connectors

2022-01-21 Thread Andriy Redko (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andriy Redko updated FLINK-25756:
-
Description: 
Since OpenSearch was forked from Elasticsearch, a few things have changed. The 
projects evolve in different directions: Elasticsearch clients up to 7.13.x 
were able to connect to OpenSearch clusters, but since 7.14 they no longer can 
[1] (Elastic continues to harden their clients to connect to Elasticsearch 
clusters only).

For example, running current Flink master against OpenSearch clusters using 
Elasticsearch 7 connectors would fail with:

 
{noformat}
 Caused by: ElasticsearchException[Elasticsearch version 6 or more is required]
 at 
org.apache.flink.elasticsearch7.shaded.org.elasticsearch.client.RestHighLevelClient$4.onResponse(RestHighLevelClient.java:2056)
 at 
org.apache.flink.elasticsearch7.shaded.org.elasticsearch.client.RestHighLevelClient$4.onResponse(RestHighLevelClient.java:2043)
 at 
org.apache.flink.elasticsearch7.shaded.org.elasticsearch.common.util.concurrent.ListenableFuture.notifyListenerDirectly(ListenableFuture.java:113)
 at 
org.apache.flink.elasticsearch7.shaded.org.elasticsearch.common.util.concurrent.ListenableFuture.done(ListenableFuture.java:100)
 at 
org.apache.flink.elasticsearch7.shaded.org.elasticsearch.common.util.concurrent.BaseFuture.set(BaseFuture.java:133)
 at 
org.apache.flink.elasticsearch7.shaded.org.elasticsearch.common.util.concurrent.ListenableFuture.onResponse(ListenableFuture.java:139)
 at 
org.apache.flink.elasticsearch7.shaded.org.elasticsearch.client.RestHighLevelClient$5.onSuccess(RestHighLevelClient.java:2129)
 at 
org.apache.flink.elasticsearch7.shaded.org.elasticsearch.client.RestClient$FailureTrackingResponseListener.onSuccess(RestClient.java:636)
 at 
org.apache.flink.elasticsearch7.shaded.org.elasticsearch.client.RestClient$1.completed(RestClient.java:376)
 at 
org.apache.flink.elasticsearch7.shaded.org.elasticsearch.client.RestClient$1.completed(RestClient.java:370)
 at 
org.apache.flink.elasticsearch7.shaded.org.apache.http.concurrent.BasicFuture.completed(BasicFuture.java:122)
 at 
org.apache.flink.elasticsearch7.shaded.org.apache.http.impl.nio.client.DefaultClientExchangeHandlerImpl.responseCompleted(DefaultClientExchangeHandlerImpl.java:181)
 at 
org.apache.flink.elasticsearch7.shaded.org.apache.http.nio.protocol.HttpAsyncRequestExecutor.processResponse(HttpAsyncRequestExecutor.java:448)
 at 
org.apache.flink.elasticsearch7.shaded.org.apache.http.nio.protocol.HttpAsyncRequestExecutor.inputReady(HttpAsyncRequestExecutor.java:338)
 at 
org.apache.flink.elasticsearch7.shaded.org.apache.http.impl.nio.DefaultNHttpClientConnection.consumeInput(DefaultNHttpClientConnection.java:265)
 at 
org.apache.flink.elasticsearch7.shaded.org.apache.http.impl.nio.client.InternalIODispatch.onInputReady(InternalIODispatch.java:81)
 at 
org.apache.flink.elasticsearch7.shaded.org.apache.http.impl.nio.client.InternalIODispatch.onInputReady(InternalIODispatch.java:39)
 at 
org.apache.flink.elasticsearch7.shaded.org.apache.http.impl.nio.reactor.AbstractIODispatch.inputReady(AbstractIODispatch.java:114)
 at 
org.apache.flink.elasticsearch7.shaded.org.apache.http.impl.nio.reactor.BaseIOReactor.readable(BaseIOReactor.java:162)
 at 
org.apache.flink.elasticsearch7.shaded.org.apache.http.impl.nio.reactor.AbstractIOReactor.processEvent(AbstractIOReactor.java:337)
 at 
org.apache.flink.elasticsearch7.shaded.org.apache.http.impl.nio.reactor.AbstractIOReactor.processEvents(AbstractIOReactor.java:315)
 at 
org.apache.flink.elasticsearch7.shaded.org.apache.http.impl.nio.reactor.AbstractIOReactor.execute(AbstractIOReactor.java:276)
 at 
org.apache.flink.elasticsearch7.shaded.org.apache.http.impl.nio.reactor.BaseIOReactor.execute(BaseIOReactor.java:104)
 at 
org.apache.flink.elasticsearch7.shaded.org.apache.http.impl.nio.reactor.AbstractMultiworkerIOReactor$Worker.run(AbstractMultiworkerIOReactor.java:591){noformat}
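For context, the compatibility mode referred to below as [2] is a server-side 
override that makes the cluster report a legacy Elasticsearch version (7.10.2) 
so that pre-7.14 Elasticsearch clients pass their version check. A minimal 
sketch, assuming OpenSearch 1.x (the setting was later removed; behavior in 
newer versions differs):

```yaml
# opensearch.yml (OpenSearch 1.x)
# Report a legacy Elasticsearch version in the main (/) response so that
# Elasticsearch clients < 7.14 accept the cluster during their version check.
compatibility.override_main_response_version: true
```

As the next paragraph shows, this only gets past the version check; the client 
still trips over other differences, such as the response tagline.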
 

With the compatibility mode [2] turned on, still fails further the line:
{noformat}
Caused by: ElasticsearchException[Invalid or missing tagline [The OpenSearch 
Project: https://opensearch.org/]]
 at 
org.apache.flink.elasticsearch7.shaded.org.elasticsearch.client.RestHighLevelClient$4.onResponse(RestHighLevelClient.java:2056)
 at 
org.apache.flink.elasticsearch7.shaded.org.elasticsearch.client.RestHighLevelClient$4.onResponse(RestHighLevelClient.java:2043)
 at 
org.apache.flink.elasticsearch7.shaded.org.elasticsearch.common.util.concurrent.ListenableFuture.notifyListenerDirectly(ListenableFuture.java:113)
 at 
org.apache.flink.elasticsearch7.shaded.org.elasticsearch.common.util.concurrent.ListenableFuture.done(ListenableFuture.java:100)
 at 
org.apache.flink.elasticsearch7.shaded.org.elasticsearch.common.util.concurrent.BaseFuture.set(BaseFuture.java:133)
 at 

[jira] [Created] (FLINK-25756) Dedicated Opensearch connectors

2022-01-21 Thread Andriy Redko (Jira)
Andriy Redko created FLINK-25756:


 Summary: Dedicated Opensearch connectors
 Key: FLINK-25756
 URL: https://issues.apache.org/jira/browse/FLINK-25756
 Project: Flink
  Issue Type: Improvement
Reporter: Andriy Redko


Since OpenSearch was forked from Elasticsearch, a few things have changed. The 
projects evolve in different directions: Elasticsearch clients up to 7.13.x 
were able to connect to OpenSearch clusters, but since 7.14 they no longer can 
[1] (Elastic continues to harden their clients to connect to Elasticsearch 
clusters only).

For example, running current Flink master against OpenSearch clusters using 
Elasticsearch 7 connectors would fail with:

 

```
Caused by: ElasticsearchException[Elasticsearch version 6 or more is required]
 at org.apache.flink.elasticsearch7.shaded.org.elasticsearch.client.RestHighLevelClient$4.onResponse(RestHighLevelClient.java:2056)
 at org.apache.flink.elasticsearch7.shaded.org.elasticsearch.client.RestHighLevelClient$4.onResponse(RestHighLevelClient.java:2043)
 at org.apache.flink.elasticsearch7.shaded.org.elasticsearch.common.util.concurrent.ListenableFuture.notifyListenerDirectly(ListenableFuture.java:113)
 at org.apache.flink.elasticsearch7.shaded.org.elasticsearch.common.util.concurrent.ListenableFuture.done(ListenableFuture.java:100)
 at org.apache.flink.elasticsearch7.shaded.org.elasticsearch.common.util.concurrent.BaseFuture.set(BaseFuture.java:133)
 at org.apache.flink.elasticsearch7.shaded.org.elasticsearch.common.util.concurrent.ListenableFuture.onResponse(ListenableFuture.java:139)
 at org.apache.flink.elasticsearch7.shaded.org.elasticsearch.client.RestHighLevelClient$5.onSuccess(RestHighLevelClient.java:2129)
 at org.apache.flink.elasticsearch7.shaded.org.elasticsearch.client.RestClient$FailureTrackingResponseListener.onSuccess(RestClient.java:636)
 at org.apache.flink.elasticsearch7.shaded.org.elasticsearch.client.RestClient$1.completed(RestClient.java:376)
 at org.apache.flink.elasticsearch7.shaded.org.elasticsearch.client.RestClient$1.completed(RestClient.java:370)
 at org.apache.flink.elasticsearch7.shaded.org.apache.http.concurrent.BasicFuture.completed(BasicFuture.java:122)
 at org.apache.flink.elasticsearch7.shaded.org.apache.http.impl.nio.client.DefaultClientExchangeHandlerImpl.responseCompleted(DefaultClientExchangeHandlerImpl.java:181)
 at org.apache.flink.elasticsearch7.shaded.org.apache.http.nio.protocol.HttpAsyncRequestExecutor.processResponse(HttpAsyncRequestExecutor.java:448)
 at