Hi all,

Is there a way of forcing a pipeline to use as many slots as possible?  I have 
a PyFlink program using SQL and the Table API, and currently my entire 
pipeline runs in a single slot.  I've tried this:

      env = StreamExecutionEnvironment.get_execution_environment()
      env.disable_operator_chaining()

I did have a pipeline with 17 different tasks running in just a single 
slot; disabling chaining gives me 79 different operators, but they are all 
still running in a single slot. Is there a way to get Flink to run different 
tasks in different slots whilst using the Table API and SQL?
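For reference, my setup boils down to roughly the sketch below. The parallelism value and the `table.exec.resource.default-parallelism` setting are just things I've experimented with, not something I've confirmed has any effect on slot assignment:

```python
from pyflink.datastream import StreamExecutionEnvironment
from pyflink.table import StreamTableEnvironment

# Build the stream environment and disable chaining, as described above.
env = StreamExecutionEnvironment.get_execution_environment()
env.disable_operator_chaining()

# Guess: a parallelism of 1 means every operator fits in one slot, so
# raising it (value here is arbitrary) might spread work across slots.
env.set_parallelism(4)

t_env = StreamTableEnvironment.create(env)

# Also tried setting the default parallelism on the table config.
t_env.get_config().get_configuration().set_string(
    "table.exec.resource.default-parallelism", "4")
```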

Many thanks,

John
