vtkhanh commented on PR #105:
URL: 
https://github.com/apache/flink-connector-aws/pull/105#issuecomment-1764481474

   > Hi @vtkhanh, could you please describe the use case more? It feels like an 
anti-pattern to use the same stream as source and sink. The Kinesis Table API 
source and sink implementations were intentionally separated post 1.15.
   
   I don't have a concrete production use case in which a stream is both read 
from and written to within a single Flink job. But it can be useful in testing, 
when we want to populate a stream with random data and then read it back using 
one table definition instead of two separate ones.
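
   For illustration, a sketch of what that single-definition test flow could 
look like in Flink SQL. The table name, schema, stream name, region, and the 
datagen source are all hypothetical, and the connector option names follow the 
pre-split Kinesis connector, so they may differ by connector version:

   ```sql
   -- Hypothetical table backed by one Kinesis stream; all names are illustrative.
   CREATE TABLE test_stream (
     id BIGINT,
     payload STRING
   ) WITH (
     'connector' = 'kinesis',
     'stream' = 'my-test-stream',   -- option name may vary by connector version
     'aws.region' = 'us-east-1',
     'format' = 'json'
   );

   -- Populate the stream with generated data (datagen table assumed to exist)...
   INSERT INTO test_stream SELECT id, payload FROM some_datagen_source;

   -- ...then read it back through the same table definition.
   SELECT * FROM test_stream;
   ```

   With separate source and sink implementations, the same test requires two 
table definitions pointing at the same physical stream.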

