Hi,

Thanks a lot for the reply. You are both right: serializing GenericRecord without specifying the schema was indeed a HUGE bottleneck in my app. I found it through JFR analysis and then read the blog post you mentioned. Now I am able to push a lot more data per second (in my test setup, at least). I am going to try this with Kafka next.

However, this creates a new problem: my app cannot handle schema changes automatically, since Flink needs to know the schema at startup. If there is a backward-compatible change upstream, new messages will not be read properly. Do you know of any workarounds for this?
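For reference, the fix described above can be sketched roughly like this, assuming Flink 1.8+ with the flink-avro dependency on the classpath; the schema string and the source class are placeholders, not real names from my app:

```java
import org.apache.avro.Schema;
import org.apache.avro.generic.GenericRecord;
import org.apache.flink.formats.avro.typeutils.GenericRecordAvroTypeInfo;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class AvroSchemaHintJob {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
            StreamExecutionEnvironment.getExecutionEnvironment();

        // The schema has to be known when the job starts; parse it once here.
        Schema schema = new Schema.Parser().parse(
            "{\"type\":\"record\",\"name\":\"Event\",\"fields\":"
            + "[{\"name\":\"id\",\"type\":\"string\"}]}");

        // Without the returns(...) hint, Flink cannot infer type information
        // for GenericRecord and falls back to slow Kryo serialization.
        // Supplying the schema lets Flink use the efficient Avro serializer.
        DataStream<GenericRecord> events = env
            .addSource(new MyGenericRecordSource()) // hypothetical source
            .returns(new GenericRecordAvroTypeInfo(schema));

        events.print();
        env.execute("avro-schema-hint");
    }
}
```

This is also exactly why the schema must be fixed at startup: the Avro type information is baked into the job graph when it is submitted.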
-- Sent from: http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/