[
https://issues.apache.org/jira/browse/FLINK-37343?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Hong Liang Teoh updated FLINK-37343:
------------------------------------
Fix Version/s: aws-connector-5.1.0
> Support for Dynamic Table Selection in DynamoDB Sink Connector
> --------------------------------------------------------------
>
> Key: FLINK-37343
> URL: https://issues.apache.org/jira/browse/FLINK-37343
> Project: Flink
> Issue Type: New Feature
> Components: Connectors / DynamoDB
> Reporter: Alex Aranovsky
> Priority: Not a Priority
> Fix For: aws-connector-5.1.0
>
>
> Hey, we have a use case where we need to sink records into multiple DynamoDB
> tables without a job restart (topology change). We use the DynamicKafka source,
> and we have essentially forked the regular DynamoDB connector and added support
> for providing a table name as part of the ElementConverter interface. Since DDB
> BatchWriteItem supports writes to multiple tables in a single request, I don't
> see a downside to including it. There is some serde cost associated with
> providing the table name as a string on each element we pass to the sink; I'm
> not sure how significant that is.
> This is both a question and a feature request. I can contribute the fork upstream:
> is this something that makes sense to include in the default connector?
> Is the serde cost negligible?
> Can I add the table name field to the DynamoDbWriteRequest class, given that
> it would be a breaking change?
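The grouping the request describes can be sketched without the connector itself. Below is a minimal, self-contained illustration, assuming a per-element table name field; `TableWriteRequest` and `BatchGrouper` are hypothetical names, not the actual connector API. It shows how per-element table names could be bucketed into the table-name-to-write-requests map shape that DynamoDB's BatchWriteItem call accepts, falling back to the sink's configured table when an element carries none.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Hypothetical write request carrying an optional per-element table name
// (null means "use the sink's default table").
record TableWriteRequest(String tableName, Map<String, String> item) {}

class BatchGrouper {
    // Groups requests into the Map<tableName, List<item>> shape that a
    // single BatchWriteItem-style call accepts, so one batch can span
    // multiple tables.
    static Map<String, List<Map<String, String>>> group(
            List<TableWriteRequest> requests, String defaultTable) {
        Map<String, List<Map<String, String>>> byTable = new LinkedHashMap<>();
        for (TableWriteRequest r : requests) {
            String table = r.tableName() != null ? r.tableName() : defaultTable;
            byTable.computeIfAbsent(table, t -> new ArrayList<>()).add(r.item());
        }
        return byTable;
    }
}
```

In this sketch the serde cost the reporter mentions is the extra `tableName` string carried per element; the grouping itself is a single pass over the batch.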
--
This message was sent by Atlassian Jira
(v8.20.10#820010)