Hi,

In what scenarios does publishing to Kafka fail with java.lang.OutOfMemoryError: Java heap space? I hit this exception, and as a resolution I added the property producerConfig.setProperty("security.protocol", "SSL"); in the Flink application.
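One thing worth noting about the heap-space error: the producer's buffer.memory is allocated on the JVM heap, so an oversized buffer competes with Flink's own state for the same heap. Below is a rough sketch of that reasoning using plain java.util.Properties (so it compiles without kafka-clients on the classpath); the 64 MiB bound and the one-producer-per-sink-topic layout are assumptions for illustration, not part of my actual job:

```java
import java.util.Properties;

public class ProducerMemorySketch {
    public static void main(String[] args) {
        Properties producerConfig = new Properties();
        // buffer.memory is the total bytes the producer may use to buffer
        // unsent records; it is allocated on the task manager's JVM heap.
        long bufferMemory = 64L * 1024 * 1024;  // 64 MiB, an assumed bound
        producerConfig.setProperty("buffer.memory", Long.toString(bufferMemory));

        // If each of the three sink topics gets its own producer instance
        // (an assumption about the job layout), three buffers can fill
        // concurrently per parallel subtask.
        long worstCaseBytes = 3 * bufferMemory;
        System.out.println("worst-case producer buffers: "
                + worstCaseBytes / (1024 * 1024) + " MiB");
    }
}
```

Multiply that again by the sink parallelism to estimate the total heap the producers alone can claim on one task manager.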

Later I started encountering org.apache.kafka.common.errors.TimeoutException:
Failed to update metadata after 60000 ms.
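The 60000 ms in that message matches the producer's default max.block.ms, which caps how long send() blocks waiting for metadata (or free buffer space) before throwing this TimeoutException. A hedged sketch of the relevant knobs, again with plain java.util.Properties and illustrative values:

```java
import java.util.Properties;

public class MetadataTimeoutSketch {
    public static void main(String[] args) {
        Properties producerConfig = new Properties();
        // max.block.ms bounds how long send()/partitionsFor() block waiting
        // for metadata or buffer space; its default of 60000 ms is the figure
        // in the "Failed to update metadata after 60000 ms" message.
        producerConfig.setProperty("max.block.ms", "120000"); // illustrative raise
        // metadata.max.age.ms forces a periodic metadata refresh even when no
        // partition leadership has changed (default 300000 ms).
        producerConfig.setProperty("metadata.max.age.ms", "180000");
        System.out.println("max.block.ms=" + producerConfig.getProperty("max.block.ms"));
    }
}
```

Raising the timeout only masks the problem if the real cause is unreachable brokers, e.g. an SSL-only listener contacted without security.protocol=SSL.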

The Flink application consumes data from one topic and processes it into 3 Kafka topics. Could someone share some insights on this?


I face this exception intermittently and the job terminates.

I am using FlinkKafkaProducer010 with these properties set:

producerConfig.setProperty(ProducerConfig.COMPRESSION_TYPE_CONFIG, CompressionType.LZ4.name);
producerConfig.setProperty(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapAddress);
producerConfig.setProperty(ProducerConfig.REQUEST_TIMEOUT_MS_CONFIG, "300000");
producerConfig.setProperty(ProducerConfig.MAX_REQUEST_SIZE_CONFIG, "2000000");
producerConfig.setProperty(ProducerConfig.BATCH_SIZE_CONFIG, "52428800");
producerConfig.setProperty(ProducerConfig.LINGER_MS_CONFIG, "900");
producerConfig.setProperty(ProducerConfig.BUFFER_MEMORY_CONFIG, "524288000");
producerConfig.setProperty(ProducerConfig.RETRY_BACKOFF_MS_CONFIG, "190000");
producerConfig.setProperty(ProducerConfig.ACKS_CONFIG, "0");
producerConfig.setProperty(ProducerConfig.RETRIES_CONFIG, "2147483647");
producerConfig.setProperty("security.protocol", "SSL");
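For reference, the arithmetic these values imply (a rough sketch of my own reading, using MiB conversions; I am not certain this is the cause):

```java
public class ConfigArithmetic {
    public static void main(String[] args) {
        long batchSize = 52_428_800L;      // batch.size from the config above
        long bufferMemory = 524_288_000L;  // buffer.memory from the config above

        // batch.size here is 50 MiB per partition batch, far above the
        // 16 KiB default; buffer.memory is 500 MiB in total.
        System.out.println("batch.size    = " + batchSize / (1024 * 1024) + " MiB");
        System.out.println("buffer.memory = " + bufferMemory / (1024 * 1024) + " MiB");

        // Only this many full batches fit before send() blocks -- and the
        // whole buffer is heap memory contending with Flink's own state.
        System.out.println("full batches that fit: " + bufferMemory / batchSize);
    }
}
```

A side observation on the same config: with acks set to "0" the producer receives no acknowledgements, so the retries of 2147483647 can rarely take effect, since most send failures are never detected.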
