[ https://issues.apache.org/jira/browse/FLINK-7963?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Rinat Sharipov updated FLINK-7963:
----------------------------------
    Description: 
Hi guys, I've got an idea for a small improvement to testing Flink jobs.

All my jobs are written in the following manner: I've got a context class, 
which holds the job components and the information about how to wire them, 
and a bootstrap class that initializes this context, retrieves the Flink env 
from it, and executes the job.

This approach lets me implement all jobs in the same manner and simplifies 
job testing. All I need to do when writing tests is to override the Flink env 
with a local env and override some of the job components (a minimal sketch of 
the pattern follows).
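
Just to illustrate, here is a sketch of that pattern; all the names 
(`JobContext`, `ProductionContext`, `Bootstrap`, `TestContext`) are made up 
for the example, only the Flink API calls are real:

```scala
import org.apache.flink.streaming.api.scala._

// The context wires the job components and exposes the execution environment.
trait JobContext {
  def env: StreamExecutionEnvironment
  def source: DataStream[String]
}

class ProductionContext extends JobContext {
  override val env: StreamExecutionEnvironment =
    StreamExecutionEnvironment.getExecutionEnvironment
  override def source: DataStream[String] = env.socketTextStream("some-host", 9999)
}

// The bootstrap builds the topology from the context and executes it.
object Bootstrap {
  def run(ctx: JobContext): Unit = {
    ctx.source.map(_.toUpperCase).print()
    ctx.env.execute("my-job")
  }
}

// In tests, only the env (and any components under test) are overridden.
class TestContext extends JobContext {
  override val env: StreamExecutionEnvironment =
    StreamExecutionEnvironment.createLocalEnvironment(parallelism = 1)
  override def source: DataStream[String] = env.fromElements("a", "b")
}
```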

Everything was fine until I wanted to enable checkpointing and implement some 
business logic that should be called when checkpointing is triggered. I 
realized that I would like to test this logic, and the best approach for me 
is to trigger a savepoint on Flink cluster shutdown; but when I looked 
through the source code, I found that this is quite challenging and can't be 
achieved through configuration alone.
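
To make the intent concrete, here is a minimal sketch of the kind of 
checkpoint-time logic I mean, using Flink's `CheckpointedFunction` interface; 
the `BufferingSink` and its flush are just an illustrative example:

```scala
import org.apache.flink.runtime.state.{FunctionInitializationContext, FunctionSnapshotContext}
import org.apache.flink.streaming.api.checkpoint.CheckpointedFunction
import org.apache.flink.streaming.api.functions.sink.SinkFunction

import scala.collection.mutable

// A sink that buffers records and flushes them when a checkpoint
// (or savepoint) barrier arrives: the flush is exactly the kind of
// business logic the test wants to exercise.
class BufferingSink extends SinkFunction[String] with CheckpointedFunction {
  private val buffer = mutable.ArrayBuffer.empty[String]

  override def invoke(value: String): Unit = buffer += value

  // Called on every checkpoint/savepoint trigger.
  override def snapshotState(context: FunctionSnapshotContext): Unit = {
    // e.g. flush `buffer` to an external system here
    buffer.clear()
  }

  override def initializeState(context: FunctionInitializationContext): Unit = {
    // restore from managed state here if needed
  }
}
```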

So, I would like to discuss the following proposals:

* add the ability to create a local env from a given configuration via 
`org.apache.flink.streaming.api.scala.StreamExecutionEnvironment#createLocalEnvironment(parallelism, configuration)`; 
currently the Scala API only lets us specify the parallelism, but the Java 
API (which the Scala API delegates to) already contains such a method
* add the ability to trigger a savepoint in the Flink mini cluster on `stop`, 
if such a property is specified in the configuration (see the sketch after 
this list)
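
A rough sketch of what the first proposal could look like, and of how the 
second one might be used from a test. The Java overload it delegates to 
already exists; the `local.savepoint-on-stop.dir` key is purely hypothetical 
and not an existing Flink option:

```scala
import org.apache.flink.configuration.Configuration
import org.apache.flink.streaming.api.environment.{StreamExecutionEnvironment => JavaEnv}
import org.apache.flink.streaming.api.scala.StreamExecutionEnvironment

object LocalEnvProposal {
  // Proposed Scala-API method: a thin wrapper around the Java method
  // that already accepts a Configuration.
  def createLocalEnvironment(parallelism: Int, configuration: Configuration): StreamExecutionEnvironment =
    new StreamExecutionEnvironment(JavaEnv.createLocalEnvironment(parallelism, configuration))

  def exampleUsage(): Unit = {
    val conf = new Configuration()
    // Hypothetical key for the second proposal (not an existing option):
    // ask the mini cluster to take a savepoint here when it is stopped.
    conf.setString("local.savepoint-on-stop.dir", "file:///tmp/savepoints")
    val env = createLocalEnvironment(parallelism = 1, configuration = conf)
    // ... build the topology and env.execute() as usual ...
  }
}
```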

What do you think about it? As for me, it'll give us more flexibility in 
tests, and will not force us to use special test templates such as 
`SavepointMigrationTestBase`.

Thx



> Add ability to trigger a savepoint on Flink cluster shutdown
> ------------------------------------------------------------
>
>                 Key: FLINK-7963
>                 URL: https://issues.apache.org/jira/browse/FLINK-7963
>             Project: Flink
>          Issue Type: New Feature
>          Components: Configuration
>            Reporter: Rinat Sharipov
>



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
