[ https://issues.apache.org/jira/browse/FLINK-30998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17687932#comment-17687932 ]
Andriy Redko commented on FLINK-30998:
--------------------------------------

Thanks [~lilyevsky], looking into it.

> Also I noticed on the branch you still have Flink version as 1.16.0, while in
> main it is 1.16.1, so probably you are going to correct that.

Hm ... this should not be the case
[https://github.com/apache/flink-connector-opensearch/commit/17f5fcafdb393b0b460cbe5e56906e24576221f1],
could you please point out where you still see 1.16.0?

> Also question: are you going to maintain two variants of this connector? One
> for Opensearch 1.3.0 and another for 2.5.0?

The OpenSearch connector should work with both 1.x and 2.x clusters. The 1.x
client is the baseline, since the 2.x clients require at least JDK 11, while
Apache Flink keeps a JDK 8 baseline.

> Add optional exception handler to flink-connector-opensearch
> ------------------------------------------------------------
>
>                 Key: FLINK-30998
>                 URL: https://issues.apache.org/jira/browse/FLINK-30998
>             Project: Flink
>          Issue Type: Improvement
>          Components: Connectors / Opensearch
>    Affects Versions: 1.16.1
>            Reporter: Leonid Ilyevsky
>            Priority: Major
>
> Currently, when there is a failure coming from Opensearch, a
> FlinkRuntimeException is thrown from the OpensearchWriter.java code (line 346).
> This makes the Flink pipeline fail, and there is no way to handle the
> exception in the client code.
> I suggest adding an option to set a failure handler, similar to the way it is
> done in the Elasticsearch connector. This way the client code has a chance to
> examine the failure and handle it.
> Here is a use case where this would be very useful. We are using streams on
> the Opensearch side, and we are setting our own document IDs. Sometimes these
> IDs are duplicated; we need to ignore this situation and continue (this is
> how it works for us with Elasticsearch).
> However, with the Opensearch connector, the error comes back saying that the
> batch failed (even though most of the documents were indexed and only the
> ones with duplicated IDs were rejected), and the whole Flink job fails.

--
This message was sent by Atlassian Jira
(v8.20.10#820010)
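To illustrate the proposal above, here is a minimal, self-contained sketch of what a pluggable failure handler could look like. It is loosely modeled on the handler pattern in the Elasticsearch connector, but every name here (FailureHandler, IGNORE_DUPLICATE_IDS, handleBulkItemFailure) is hypothetical, not an actual flink-connector-opensearch API. The example policy swallows HTTP 409 version-conflict rejections (the duplicate-ID case from the use case above) and lets any other failure propagate, which matches the current behavior of failing the job.

```java
public class FailureHandlerSketch {

    /**
     * Hypothetical handler interface: decides what to do with a per-item
     * bulk failure instead of unconditionally failing the pipeline.
     */
    @FunctionalInterface
    public interface FailureHandler {
        /** Return true to swallow the failure, false to let the job fail. */
        boolean onFailure(int restStatusCode, String failureMessage);
    }

    /**
     * Example policy for the duplicate-ID use case: ignore version conflicts
     * (HTTP 409), which OpenSearch reports when a document ID already exists
     * and the write is rejected.
     */
    public static final FailureHandler IGNORE_DUPLICATE_IDS =
            (status, message) -> status == 409
                    || message.contains("version_conflict_engine_exception");

    /**
     * Simulates the writer's per-item failure check. Returns true when the
     * handler swallowed the failure; otherwise throws, mirroring the
     * FlinkRuntimeException the current writer raises.
     */
    public static boolean handleBulkItemFailure(
            FailureHandler handler, int status, String message) {
        if (handler.onFailure(status, message)) {
            return true; // swallowed: the job keeps running
        }
        throw new RuntimeException("Bulk item failed: " + message);
    }
}
```

With such a hook, client code could plug in IGNORE_DUPLICATE_IDS (or any custom policy) when building the sink, so that rejected duplicate IDs are skipped while genuine indexing errors still fail the job.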