[ https://issues.apache.org/jira/browse/FLINK-5051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15693679#comment-15693679 ]

ASF GitHub Bot commented on FLINK-5051:
---------------------------------------

GitHub user StefanRRichter opened a pull request:

    https://github.com/apache/flink/pull/2863

    [FLINK-5051] Backwards compatibility for serializers in backend state

    This PR sits on top of PR #2781 and introduces backwards compatibility for 
state serializers in keyed backends. We do so by providing version 
compatibility checking for ``TypeSerializer`` and by making the serializers a 
mandatory part of a keyed backend's meta data in checkpoints (so that we have 
everything required to reconstruct state in a self-contained way). A 
serialization proxy is introduced for keyed backend state. Currently, this 
serialization proxy covers the meta data, but not yet the actual data. As the 
PR essentially moves functionality to a different place, it is already covered 
by existing tests.
    
    Notice: we should introduce a similar approach for 
``OperatorStateBackend``s.
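
    To make the idea of version compatibility checking concrete, here is a
minimal, hypothetical sketch of a versioned serialization proxy for the keyed
backend's meta data. The class and method names below are invented for
illustration and are not the actual API added by this PR; the point is only the
pattern of writing a version header ahead of the meta data and checking it
against the readable versions on restore.

    import java.io.DataInputStream;
    import java.io.DataOutputStream;
    import java.io.IOException;

    // Hypothetical sketch; names are invented for illustration, not the PR's actual classes.
    public abstract class VersionedMetaDataProxySketch {

        /** The meta data format version this proxy writes. */
        protected abstract int getWriteVersion();

        /** Older versions this proxy can still read, i.e. is backwards compatible with. */
        protected abstract int[] getCompatibleReadVersions();

        /** Writes a version header followed by the proxy-specific meta data. */
        public final void write(DataOutputStream out) throws IOException {
            out.writeInt(getWriteVersion());
            writeMetaData(out);
        }

        /** Reads the version header, checks compatibility, then reads the meta data for that version. */
        public final void read(DataInputStream in) throws IOException {
            int foundVersion = in.readInt();
            if (!isReadable(foundVersion)) {
                throw new IOException("Incompatible meta data version: " + foundVersion);
            }
            readMetaData(in, foundVersion);
        }

        private boolean isReadable(int foundVersion) {
            if (foundVersion == getWriteVersion()) {
                return true;
            }
            for (int compatible : getCompatibleReadVersions()) {
                if (compatible == foundVersion) {
                    return true;
                }
            }
            return false;
        }

        /** Subclasses write e.g. the key serializer and the per-state serializers here. */
        protected abstract void writeMetaData(DataOutputStream out) throws IOException;

        /** Subclasses read the meta data, possibly dispatching on an older version. */
        protected abstract void readMetaData(DataInputStream in, int version) throws IOException;
    }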


You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/StefanRRichter/flink 
serializer-backwards-compatibility

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/flink/pull/2863.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #2863
    
----
commit a373585c2fe71b467f49f0e295dc647b43ab7a9c
Author: Stefan Richter <s.rich...@data-artisans.com>
Date:   2016-11-01T11:29:01Z

    Backwards compatibility 1.1 -> 1.2

commit 8e4e4bcede50e66a95928ec854e51d45a7df28bf
Author: Stefan Richter <s.rich...@data-artisans.com>
Date:   2016-11-09T13:54:35Z

    Removing some unecessary code from migration classes

commit 78bd66fade7f836eafbab978329caf1ea26f2ffc
Author: Stefan Richter <s.rich...@data-artisans.com>
Date:   2016-11-09T17:21:13Z

    MultiStreamStateHandle

commit a9355679c3476dd890b54312e1696b61c7839873
Author: Stefan Richter <s.rich...@data-artisans.com>
Date:   2016-11-10T13:18:55Z

    Added migration unit test

commit d079bd4bdb762c307a3c5cd084590804b90996b1
Author: Stefan Richter <s.rich...@data-artisans.com>
Date:   2016-11-10T13:45:58Z

    rebase fixes

commit 9f47bac9c25fc33993c3942a57462039cc578dcd
Author: Stefan Richter <s.rich...@data-artisans.com>
Date:   2016-11-11T13:46:39Z

    Minor cleanups: deleting more unnecessary classes

commit 2bbe66386d28c7914c62e2c3829ff3ab6840164c
Author: Stefan Richter <s.rich...@data-artisans.com>
Date:   2016-11-23T13:15:33Z

    Versioned serialization

commit 6460e27717ab208aada988ba2c83d5628b31b310
Author: Stefan Richter <s.rich...@data-artisans.com>
Date:   2016-11-23T17:59:45Z

    Common meta info introduced to keyed backends

commit e7d66377730339523bad8e3e6e75865ea5a29a6b
Author: Stefan Richter <s.rich...@data-artisans.com>
Date:   2016-11-23T21:40:26Z

    Introducing isCompatibleWith to TypeSerializers

commit 89e3779d231fd0dadb01782791c92ec8ebb15a81
Author: Stefan Richter <s.rich...@data-artisans.com>
Date:   2016-11-23T22:33:42Z

    Splitting / Introducing interface for versiond and compatibile

commit 6714a7efd3d839befda7a9b744311494e4ecb714
Author: Stefan Richter <s.rich...@data-artisans.com>
Date:   2016-11-24T10:59:01Z

    Cleanup and documentation

commit 6df300f7f5a7d7b38b00ecd6636ecd53bc15d370
Author: Stefan Richter <s.rich...@data-artisans.com>
Date:   2016-11-24T11:18:43Z

    Cleanup and documentation

commit b22273455d8f9282af715e502244811543e3fb99
Author: Stefan Richter <s.rich...@data-artisans.com>
Date:   2016-11-24T16:19:51Z

    Better abstraction

----


> Backwards compatibility for serializers in backend state
> --------------------------------------------------------
>
>                 Key: FLINK-5051
>                 URL: https://issues.apache.org/jira/browse/FLINK-5051
>             Project: Flink
>          Issue Type: Improvement
>          Components: State Backends, Checkpointing
>            Reporter: Stefan Richter
>
> When a new state is registered, e.g. in a keyed backend via 
> `getPartitionedState`, the caller has to provide all type serializers 
> required for the persistence of state components. Explicitly passing the 
> serializers on state creation already allows for potential version upgrades 
> of serializers.
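
For context, this is roughly how a caller hands over a serializer when
registering state today. The user-facing API is shown instead of the internal
`getPartitionedState` call; the class and state names are placeholders, and the
two-argument `ValueStateDescriptor` constructor is assumed.

    import org.apache.flink.api.common.functions.RichFlatMapFunction;
    import org.apache.flink.api.common.state.ValueState;
    import org.apache.flink.api.common.state.ValueStateDescriptor;
    import org.apache.flink.api.common.typeutils.base.LongSerializer;
    import org.apache.flink.configuration.Configuration;
    import org.apache.flink.util.Collector;

    // Example only: the explicit serializer passed in the descriptor is exactly what the
    // issue proposes to also write into the checkpoint, so that a restore does not depend
    // on the job registering the state first.
    public class CountPerKey extends RichFlatMapFunction<Long, Long> {

        private transient ValueState<Long> count;

        @Override
        public void open(Configuration parameters) {
            ValueStateDescriptor<Long> descriptor =
                    new ValueStateDescriptor<>("count", LongSerializer.INSTANCE);
            count = getRuntimeContext().getState(descriptor);
        }

        @Override
        public void flatMap(Long value, Collector<Long> out) throws Exception {
            Long current = count.value();
            long updated = (current == null ? 0L : current) + 1L;
            count.update(updated);
            out.collect(updated);
        }
    }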
> However, those serializers are currently not part of any snapshot and are 
> only provided at runtime, when the state is newly registered or restored. For 
> backwards compatibility, this has strong implications: checkpoints are not 
> self-contained, in that state is currently a black box without knowledge about 
> its corresponding serializers. Most cases where we would need to restructure 
> the state are basically lost. We could only convert the state lazily at 
> runtime, and only once the user registers the concrete state, which might 
> happen at unpredictable points.
> I suggest adapting our solution as follows:
> - As now, all states are registered with their set of serializers.
> - Unlike now, all serializers are written to the snapshot. This makes 
> savepoints self-contained and also allows creating inspection tools for 
> savepoints at some point in the future.
> - Introduce an interface {{Versioned}} with {{long getVersion()}} and 
> {{boolean isCompatible(Versioned v)}}, which is then implemented by 
> serializers. Compatible serializers must ensure that they can deserialize 
> older versions and can then serialize them in their new format. This is how 
> we upgrade.
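
A minimal sketch of the interface as proposed in the last bullet; only
getVersion() and isCompatible() come from the issue text, the example class and
its version numbers are hypothetical.

    // Sketch of the proposed interface.
    public interface Versioned {

        /** The format version this implementation writes. */
        long getVersion();

        /** Whether this implementation can read data written by the given instance. */
        boolean isCompatible(Versioned other);
    }

    // A TypeSerializer would implement Versioned in addition to its (de)serialization
    // methods; shown here as a standalone class to keep the sketch self-contained.
    class ExampleVersionedSerializer implements Versioned {

        private static final long CURRENT_VERSION = 2L;

        @Override
        public long getVersion() {
            return CURRENT_VERSION;
        }

        @Override
        public boolean isCompatible(Versioned other) {
            // Can read its own format and the previous one; on restore, old-format state
            // is deserialized with the old logic and written back in the new format.
            return other instanceof ExampleVersionedSerializer
                    && other.getVersion() >= CURRENT_VERSION - 1
                    && other.getVersion() <= CURRENT_VERSION;
        }
    }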
> We need to find the right tradeoff in how many places we need to store the 
> serializers. I suggest writing them once per parallel operator instance for 
> each state, i.e. we have a map with state_name -> tuple3<serializer<KEY>, 
> serializer<NAMESPACE>, serializer<STATE>>. This could go before all 
> key-groups are written, right at the head of the file. Then, for each file we 
> see on restore, we can first read the serializer map from the head of the 
> stream, then go through the key groups by offset.
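
A rough sketch of the layout suggested above, assuming a hypothetical
SerializerTriple holder and plain Java serialization for the serializers
themselves (TypeSerializer is Serializable): the state_name -> serializer map is
written once at the head of the per-operator stream, and on restore it is read
back before the key groups are accessed by offset.

    import java.io.IOException;
    import java.io.ObjectInputStream;
    import java.io.ObjectOutputStream;
    import java.io.Serializable;
    import java.util.HashMap;
    import java.util.Map;

    import org.apache.flink.api.common.typeutils.TypeSerializer;

    // Hypothetical sketch of the proposed layout; class names and the use of plain
    // Java serialization are assumptions for illustration only.
    public class SerializerMapLayoutSketch {

        /** The tuple3 of key, namespace and state serializer kept per registered state. */
        public static class SerializerTriple implements Serializable {
            public final TypeSerializer<?> keySerializer;
            public final TypeSerializer<?> namespaceSerializer;
            public final TypeSerializer<?> stateSerializer;

            public SerializerTriple(
                    TypeSerializer<?> keySerializer,
                    TypeSerializer<?> namespaceSerializer,
                    TypeSerializer<?> stateSerializer) {
                this.keySerializer = keySerializer;
                this.namespaceSerializer = namespaceSerializer;
                this.stateSerializer = stateSerializer;
            }
        }

        /** Written once per parallel operator instance, at the head of the file, before any key groups. */
        public static void writeSerializerMap(
                ObjectOutputStream out, Map<String, SerializerTriple> serializers) throws IOException {
            out.writeInt(serializers.size());
            for (Map.Entry<String, SerializerTriple> entry : serializers.entrySet()) {
                out.writeUTF(entry.getKey());       // state_name
                out.writeObject(entry.getValue());  // its serializers
            }
        }

        /** On restore, the serializer map is read from the head of the stream first ... */
        public static Map<String, SerializerTriple> readSerializerMap(ObjectInputStream in)
                throws IOException, ClassNotFoundException {
            int numStates = in.readInt();
            Map<String, SerializerTriple> serializers = new HashMap<>(numStates);
            for (int i = 0; i < numStates; i++) {
                serializers.put(in.readUTF(), (SerializerTriple) in.readObject());
            }
            // ... then the key groups are visited by their recorded offsets, using these serializers.
            return serializers;
        }
    }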



