[
https://issues.apache.org/jira/browse/FLINK-24229?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17442712#comment-17442712
]
Nir Tsruya commented on FLINK-24229:
------------------------------------
hey [~CrynetLogistics]
We are working on the AsyncSinkWriter implementation, and I can see that there
are 2 consumers, the
[fatalExceptionCons|https://github.com/apache/flink/blob/master/flink-connectors/flink-connector-base/src/main/java/org/apache/flink/connector/base/sink/writer/AsyncSinkWriter.java#L136]
and
[requestResult|https://github.com/apache/flink/blob/master/flink-connectors/flink-connector-base/src/main/java/org/apache/flink/connector/base/sink/writer/AsyncSinkWriter.java#L304]
. As I understand it, the requestResult consumer simply adds failed write
requests back to the queue for retry, while the fatalExceptionCons is invoked
when a fatal error that should not be retried occurs, in which case it simply
rethrows the error.
We are interested in adding functionality that would allow configuring the
behavior on fatal exceptions. For example, on a fatal error, publish the
record to a DLQ. Preferably such behavior would be configurable by the user
of the AsyncSink.
What do you think?
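To make the idea concrete, here is a minimal, self-contained Java sketch of what a pluggable fatal-exception handler could look like. All names here (DeadLetterQueueHandler, RethrowingHandler, onFatalError) are hypothetical and are not part of Flink's AsyncSinkWriter API; the writer would simply accept a Consumer<Exception> at construction time instead of hard-coding one.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Hypothetical handler: instead of failing the job on a fatal (non-retryable)
// error, record the error so it can be forwarded to a dead-letter queue.
class DeadLetterQueueHandler implements Consumer<Exception> {
    final List<Exception> deadLetters = new ArrayList<>();

    @Override
    public void accept(Exception e) {
        deadLetters.add(e); // a real implementation would publish to a DLQ here
    }
}

// Sketch of the current default behavior: any fatal error fails the writer.
class RethrowingHandler implements Consumer<Exception> {
    @Override
    public void accept(Exception e) {
        throw new RuntimeException(e);
    }
}

public class ConfigurableFatalExceptionDemo {
    // The writer would invoke the user-supplied handler wherever it currently
    // calls the hard-coded fatalExceptionCons.
    static void onFatalError(Consumer<Exception> handler, Exception e) {
        handler.accept(e);
    }

    public static void main(String[] args) {
        DeadLetterQueueHandler dlq = new DeadLetterQueueHandler();
        onFatalError(dlq, new IllegalStateException("non-retryable write"));
        System.out.println("dead letters captured: " + dlq.deadLetters.size());
    }
}
```

With this shape, keeping RethrowingHandler as the default would preserve today's fail-fast semantics while letting users opt into DLQ-style handling.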
> [FLIP-171] DynamoDB implementation of Async Sink
> ------------------------------------------------
>
> Key: FLINK-24229
> URL: https://issues.apache.org/jira/browse/FLINK-24229
> Project: Flink
> Issue Type: New Feature
> Components: Connectors / Common
> Reporter: Zichen Liu
> Assignee: Zichen Liu
> Priority: Major
> Labels: pull-request-available
> Fix For: 1.15.0
>
>
> h2. Motivation
> *User stories:*
> As a Flink user, I’d like to use DynamoDB as sink for my data pipeline.
> *Scope:*
> * Implement an asynchronous sink for DynamoDB by inheriting the
> AsyncSinkBase class. The implementation can for now reside in its own module
> in flink-connectors.
> * Implement an asynchronous sink writer for DynamoDB by extending the
> AsyncSinkWriter. The implementation must deal with failed requests and retry
> them using the {{requeueFailedRequestEntry}} method. If possible, the
> implementation should batch multiple write requests to DynamoDB (e.g. via
> {{BatchWriteItem}}) for increased throughput. The implemented Sink Writer
> will be used by the Sink class that will be created as part of this story.
> * Java / code-level docs.
> * End to end testing: add tests that hit a real AWS instance. (How best to
> donate resources to the Flink project to allow this to happen?)
> h2. References
> More details to be found
> [https://cwiki.apache.org/confluence/display/FLINK/FLIP-171%3A+Async+Sink]
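For reference, the buffer-and-retry loop the scope above describes (failed entries are re-queued for a later attempt, successful ones leave the buffer) can be sketched with a simplified, self-contained stand-in. The Writer class and the sendOk predicate below are purely illustrative; they are not the real Flink AsyncSinkWriter or AWS SDK APIs.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;
import java.util.function.Predicate;

public class BatchRetrySketch {
    // Simplified stand-in for an async sink writer's buffer/flush cycle.
    static class Writer<T> {
        final Deque<T> buffer = new ArrayDeque<>();
        final Predicate<T> sendOk; // stand-in for the asynchronous client call

        Writer(Predicate<T> sendOk) {
            this.sendOk = sendOk;
        }

        void write(T entry) {
            buffer.addLast(entry);
        }

        // Drain up to batchSize entries, "send" them as one batch, and put
        // failures back at the head of the buffer; this mirrors the role of
        // requeueFailedRequestEntry in the description above.
        List<T> flushOnce(int batchSize) {
            List<T> batch = new ArrayList<>();
            while (batch.size() < batchSize && !buffer.isEmpty()) {
                batch.add(buffer.pollFirst());
            }
            List<T> failed = new ArrayList<>();
            for (T entry : batch) {
                if (!sendOk.test(entry)) {
                    failed.add(entry);
                }
            }
            // Re-queue failures in their original order for the next attempt.
            for (int i = failed.size() - 1; i >= 0; i--) {
                buffer.addFirst(failed.get(i));
            }
            return failed;
        }
    }

    public static void main(String[] args) {
        // Simulate a transport that rejects entries containing "bad".
        Writer<String> w = new Writer<>(s -> !s.contains("bad"));
        w.write("a");
        w.write("bad-1");
        w.write("b");
        List<String> failed = w.flushOnce(25);
        System.out.println("failed: " + failed + ", requeued: " + w.buffer);
    }
}
```

A real DynamoDB writer would additionally have to inspect the per-item results of a batch call (DynamoDB batch writes can partially succeed) to decide which entries to re-queue.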
--
This message was sent by Atlassian Jira
(v8.20.1#820001)