You can run them in a local environment. I do it for my integration tests 
every time:
flinkEnvironment = ExecutionEnvironment.createLocalEnvironment(1)

E.g. (even together with a local HDFS cluster): 
https://github.com/ZuInnoTe/hadoopcryptoledger/blob/master/examples/scala-flink-ethereumblock/src/it/scala/org/zuinnote/flink/ethereum/example/FlinkEthereumBlockCounterFlinkMasterIntegrationSpec.scala
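
A minimal, self-contained sketch of a job run against such a local environment (assumes the Flink Scala DataSet API on the classpath; the object name and word-count job are illustrative, not taken from the linked example):

```scala
import org.apache.flink.api.scala._

object LocalEnvironmentExample {
  def main(args: Array[String]): Unit = {
    // Create a local Flink environment with parallelism 1;
    // the job runs in-process, no external cluster required.
    val env = ExecutionEnvironment.createLocalEnvironment(1)

    // A trivial word-count style job to exercise the local runtime.
    val counts = env
      .fromElements("flink runs locally", "flink integration test")
      .flatMap(_.split(" "))
      .map((_, 1))
      .groupBy(0)
      .sum(1)

    // print() triggers execution on the local environment.
    counts.print()
  }
}
```

The same pattern works in a test harness: build the job against the local environment, execute it, and assert on the result (or on whether execution throws).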

> On 22. Jul 2018, at 12:42, Pavel Ciorba <pavli...@gmail.com> wrote:
> 
> Hi all!
> 
> From what I know, Flink jobs can be run straight from the IDE because IDEA 
> will create a mini Flink runtime.
> 
> What is the underlying CLI command that JetBrains IDEA issues to run a Flink 
> job in a mini-runtime?
> 
> My use case is that I want to see if the job written using the SQL API is 
> valid. So the plan is to run the job on a VM in a mini Flink runtime to see 
> whether it will throw an exception or not. If it is healthy, deploy it in the 
> main cluster.
> 
> Maybe you have some advice about this, or just an answer to the first 
> question :)
> 
> Thanks!
