Hi, I'm testing Flink for stream processing. In my use case there are multiple pipelines processing messages from multiple Kafka sources, and I have some questions about jobs and slots.
1) When I deploy a new job, it takes a task slot in the TaskManager. The job never ends (I assume because it is a streaming pipeline), so the slot is never released; it stays busy even when no new messages are coming from the Kafka topic. Is that expected, or am I doing something wrong? Is there a way to use the job slots more efficiently?

2) In my use case I need good job scalability: I could potentially have many pipelines running in the Flink environment, but on the other hand, increased latency would not be a serious problem for me. Are there any recommendations regarding memory per slot? I saw that the CPU recommendation is one core per slot; given that higher latency would not be a big problem for me, do you see another good reason to follow this recommendation?

Thank you,
Regards
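For context, this is roughly the kind of TaskManager configuration my questions are about (the slot count and memory value below are just illustrative, not my actual settings):

```yaml
# flink-conf.yaml (illustrative values)
taskmanager.numberOfTaskSlots: 4        # slots shared by all jobs deployed to this TM
taskmanager.memory.process.size: 4096m  # total TM memory, divided among the slots
```

So with a setup like this, every streaming job I deploy permanently occupies one of those four slots, which is what prompted question 1.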