Hi,

 

I keep getting this exception:
"org.apache.flink.streaming.connectors.kafka.FlinkKafkaException: Too many
ongoing snapshots. Increase kafka producers pool size or decrease number of
concurrent checkpoints."

 

I believe DEFAULT_KAFKA_PRODUCERS_POOL_SIZE is 5 and I need to increase it.
What should I take into consideration when increasing this size?

 

I run with a parallelism of 16 and have a checkpoint configuration like this:

 

public static void setup(StreamExecutionEnvironment env) {
    env.enableCheckpointing(2_000);
    env.getCheckpointConfig().setCheckpointingMode(CheckpointingMode.EXACTLY_ONCE);
    env.getCheckpointConfig().setMinPauseBetweenCheckpoints(1_000);
    env.getCheckpointConfig().setCheckpointTimeout(10_000);
    env.getCheckpointConfig().setMaxConcurrentCheckpoints(1);
    env.setStateBackend(new MemoryStateBackend(Integer.MAX_VALUE));
    // env.getCheckpointConfig().enableExternalizedCheckpoints(
    //         CheckpointConfig.ExternalizedCheckpointCleanup.RETAIN_ON_CANCELLATION);
}
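For context, if I read the API correctly, the pool size can only be set through
the FlinkKafkaProducer constructor that also takes the semantic. A sketch of what
I think the change would look like (the topic name, serialization schema, and
producer properties below are placeholders, and 10 is just an example value):

```java
// Hypothetical sketch: "my-topic", mySchema, and producerProps are placeholders
// I would fill in with my actual topic, KafkaSerializationSchema, and Properties.
// The last constructor argument overrides DEFAULT_KAFKA_PRODUCERS_POOL_SIZE (5).
FlinkKafkaProducer<String> producer = new FlinkKafkaProducer<>(
        "my-topic",                                // default target topic
        mySchema,                                  // KafkaSerializationSchema<String>
        producerProps,                             // Properties with bootstrap.servers etc.
        FlinkKafkaProducer.Semantic.EXACTLY_ONCE,  // transactional writes
        10);                                       // kafkaProducersPoolSize
```

With max concurrent checkpoints set to 1, my understanding is the pool is only
exhausted when checkpoints complete but their transactions have not yet been
committed, so I am unsure whether a larger pool or a longer checkpoint interval
is the right fix.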

 

Regards,

 

Min
