Dear all,
I am using AWS Lambda functions to produce messages to a Kafka cluster.
Since I cannot control how frequently a Lambda function is invoked, and
I cannot share objects between invocations, I have to create a new
Kafka producer for each invocation and clean it up afterwards (a sketch
follows).
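For context, this is roughly what each invocation does. A minimal
sketch, assuming the aws-lambda-java-core and kafka-clients libraries;
the handler class, topic name, and broker address are placeholders of
mine, not my real setup:

import java.util.Properties;

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

// Hypothetical handler: one producer per invocation, closed before return.
public class KafkaPublishHandler implements RequestHandler<String, String> {

    @Override
    public String handleRequest(String payload, Context context) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092"); // placeholder address
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        // try-with-resources closes (cleans up) the producer every invocation
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("test", payload));
            producer.flush(); // block until the record has actually been sent
        }
        return "sent";
    }
}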
My Kafka (1.0.0) producer errors out on large (16 MB) messages:
ERROR Error when sending message to topic test with key: null, value:
16777239 bytes with error:
(org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.RecordTooLargeException: The message is
16777239 bytes when serialized which is larger than the maximum request
size you have configured with the max.request.size configuration.
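For reference, this exception is thrown client-side: the producer's
max.request.size defaults to 1048576 bytes (1 MB), so a ~16 MB record
is rejected before it ever reaches the broker. Below is a minimal
sketch of a producer configured for larger records (broker address,
topic, and sizes are illustrative, not a recommendation); note that the
broker's message.max.bytes and the topic's max.message.bytes must also
permit records this large, or the broker will reject the batch instead:

import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class LargeRecordProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092"); // placeholder address
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        // Client-side cap behind RecordTooLargeException; default is
        // 1048576 (1 MB). 20 MB leaves headroom over the ~16 MB payload
        // plus record overhead.
        props.put("max.request.size", Integer.toString(20 * 1024 * 1024));

        // buffer.memory (default 32 MB) must also be able to hold the record.
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            String payload = new String(new char[16 * 1024 * 1024]).replace('\0', 'x');
            producer.send(new ProducerRecord<>("test", payload));
            producer.flush();
        }
    }
}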