I ran into a problem while integrating Flink with Redisson. When a task hits an exception and keeps retrying, the number of Redis clients fluctuates (sometimes it rises, sometimes it falls), but the overall trend is upward. Even though I shut down the Redisson instance by overriding the close() method, the number of Redis clients keeps growing until it reaches the upper limit and an error is thrown. Moreover, this only happens when the job runs on a Flink cluster; in local mode the client count stays stable. The test code is below. Could you suggest a specific cause and a solution? Thank you.
Flink version: 1.13.0
Redisson version: 3.16.1
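For context, this is roughly how I override close() to shut down the Redisson instance (a separate sketch from the test code below; the class name, the Redis address, and the bucket key are illustrative placeholders):

```java
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.sink.RichSinkFunction;
import org.redisson.Redisson;
import org.redisson.api.RedissonClient;
import org.redisson.config.Config;

// Illustrative sink: the Redisson client is created in open() and
// shut down in close(), so a task restart should release it.
public class RedissonSink extends RichSinkFunction<String> {

    private transient RedissonClient redisson;

    @Override
    public void open(Configuration parameters) {
        Config config = new Config();
        // Placeholder address; the real job reads this from configuration.
        config.useSingleServer().setAddress("redis://127.0.0.1:6379");
        redisson = Redisson.create(config);
    }

    @Override
    public void invoke(String value, Context context) {
        // Example write; the real job does more work here.
        redisson.getBucket("last-value").set(value);
    }

    @Override
    public void close() {
        // Release the client when the task fails or restarts. In cluster
        // mode this still does not stop the client count from growing.
        if (redisson != null && !redisson.isShutdown()) {
            redisson.shutdown();
        }
    }
}
```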
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.TimeCharacteristic;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import java.util.Properties;
import java.util.Random;
public class ExceptionTest {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment(new Configuration());
        env.setStreamTimeCharacteristic(TimeCharacteristic.ProcessingTime);
        env.enableCheckpointing(1000 * 60); // checkpoint every 60 seconds
DataStream