Re: Upgrading Flink

2020-04-14 Thread Chesnay Schepler
The only guarantee that Flink provides is that any jar working against 
public APIs will continue to work without recompilation.


There are no compatibility guarantees between clients and servers of 
different versions.


Re: Upgrading Flink

2020-04-14 Thread David Anderson
@Chesnay Flink doesn't seem to guarantee client-jobmanager compatibility,
even for bug-fix releases. For example, some jobs compiled with 1.9.0 don't
work with a cluster running 1.9.2. See
https://github.com/ververica/sql-training/issues/8#issuecomment-590966210 for
an example of a case where recompiling was necessary.

Does the Flink project have an explicit policy as to when recompiling can
be required?
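[One pragmatic safeguard, as an illustrative sketch rather than project policy: the JobManager's REST endpoint reports the cluster's version, so a deployment pipeline could compare it against the version a job was compiled with before submitting. Host name is a placeholder; the endpoint and field assume a default Flink setup:]

```shell
# query the JobManager's dashboard configuration (default REST port 8081)
curl -s http://jobmanager-host:8081/config
# the JSON response includes a "flink-version" field, e.g. "1.9.2",
# which can be compared with the Flink version the job jar was built against
```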



Re: Upgrading Flink

2020-04-14 Thread Sivaprasanna
Ideally, if the underlying cluster where the job is deployed changes
(1.8.x to 1.10.x), you should update your project dependencies to the new
version (1.10.x) and hence recompile the jobs.
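[As a sketch, the dependency bump in a Maven project would look like the following; the artifact ID and Scala suffix are illustrative, so check your build for the exact modules you use:]

```xml
<properties>
  <!-- bump from 1.8.x to the target version -->
  <flink.version>1.10.0</flink.version>
</properties>

<dependencies>
  <dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-streaming-java_2.11</artifactId>
    <version>${flink.version}</version>
    <!-- provided: the cluster ships these classes at runtime -->
    <scope>provided</scope>
  </dependency>
</dependencies>
```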



Re: Upgrading Flink

2020-04-14 Thread Chesnay Schepler
@Robert Why would he have to recompile the jobs? Shouldn't he be fine 
so long as he isn't using any API for which we broke binary compatibility?




Re: Upgrading Flink

2020-04-09 Thread Robert Metzger
Hey Stephen,

1. Yes, you should be able to migrate directly from 1.8 to 1.10; see the
compatibility table:
https://ci.apache.org/projects/flink/flink-docs-release-1.10/ops/upgrading.html#compatibility-table

2. Yes, you need to recompile (but ideally you don't need to change
anything).
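[For reference, the stop-and-resume flow from the upgrading docs can be sketched with the Flink CLI roughly as follows; the job ID, savepoint directory, and jar name are placeholders for your own setup:]

```shell
# take a savepoint and cancel the running job (with the 1.8 CLI)
flink cancel -s hdfs:///savepoints <jobId>

# after upgrading the cluster to 1.10, resume the (recompiled) job
# from the savepoint it wrote
flink run -s hdfs:///savepoints/savepoint-<id> my-job.jar
```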



On Mon, Apr 6, 2020 at 10:19 AM Stephen Connolly <
stephen.alan.conno...@gmail.com> wrote:

> Quick questions on upgrading Flink.
>
> All our jobs are compiled against Flink 1.8.x
>
> We are planning to upgrade to 1.10.x
>
> 1. Is the recommended path to upgrade one minor at a time, i.e. 1.8.x ->
> 1.9.x and then 1.9.x -> 1.10.x as a second step or is the big jump
> supported, i.e. 1.8.x -> 1.10.x in one change
>
> 2. Do we need to recompile the jobs against the newer Flink version before
> upgrading? Coordinating multiple teams can be tricky, so - short of
> spinning up a second flink cluster - our continuous deployment
> infrastructure will try to deploy the topologies compiled against 1.8.x for
> an hour or two after we have upgraded the cluster
>