[ https://issues.apache.org/jira/browse/FLINK-9808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16661178#comment-16661178 ]

ASF GitHub Bot commented on FLINK-9808:
---------------------------------------

tzulitai commented on a change in pull request #6875: [FLINK-9808] [state backends] Migrate state when necessary in state backends
URL: https://github.com/apache/flink/pull/6875#discussion_r227538700
 
 

 ##########
 File path: flink-runtime/src/test/java/org/apache/flink/runtime/state/StateBackendMigrationTestBase.java
 ##########
 @@ -0,0 +1,775 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.state;
+
+import org.apache.flink.api.common.JobID;
+import org.apache.flink.api.common.state.ValueState;
+import org.apache.flink.api.common.state.ValueStateDescriptor;
+import org.apache.flink.api.common.typeutils.TypeSerializer;
+import org.apache.flink.api.common.typeutils.TypeSerializerSchemaCompatibility;
+import org.apache.flink.api.common.typeutils.TypeSerializerSnapshot;
+import org.apache.flink.api.common.typeutils.base.IntSerializer;
+import org.apache.flink.core.memory.DataInputView;
+import org.apache.flink.core.memory.DataOutputView;
+import org.apache.flink.runtime.checkpoint.CheckpointOptions;
+import org.apache.flink.runtime.checkpoint.StateObjectCollection;
+import org.apache.flink.runtime.execution.Environment;
+import org.apache.flink.runtime.operators.testutils.DummyEnvironment;
+import org.apache.flink.types.StringValue;
+import org.apache.flink.util.TestLogger;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.rules.ExpectedException;
+
+import java.io.IOException;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.concurrent.RunnableFuture;
+
+import static org.junit.Assert.assertEquals;
+
+/**
+ * Tests for the {@link KeyedStateBackend} and {@link OperatorStateBackend} as produced
+ * by various {@link StateBackend}s.
+ */
+@SuppressWarnings("serial")
+public abstract class StateBackendMigrationTestBase<B extends AbstractStateBackend> extends TestLogger {
+
+       @Rule
+       public final ExpectedException expectedException = ExpectedException.none();
+
+       // lazily initialized stream storage
+       private CheckpointStorageLocation checkpointStorageLocation;
+
+       /**
+        * Different "personalities" of {@link CustomStringSerializer}. Instead of creating
+        * different classes, we parameterize the serializer with this and
+        * {@link CustomStringSerializerSnapshot} will instantiate serializers with the correct
+        * personality.
+        */
+       public enum SerializerVersion {
+               INITIAL,
+               RESTORE,
+               NEW
+       }
+
+       /**
+        * The compatibility behaviour of {@link CustomStringSerializer}. This controls what
+        * type of serializer {@link CustomStringSerializerSnapshot} will create for
+        * the different methods that return/create serializers.
+        */
+       public enum SerializerCompatibilityType {
+               COMPATIBLE_AS_IS,
+               REQUIRES_MIGRATION
+       }
+
+       /**
+        * The serialization timeliness behaviour of the state backend under test.
+        */
+       public enum BackendSerializationTimeliness {
+               ON_ACCESS,
+               ON_CHECKPOINTS
+       }
+
+       @Test
+       @SuppressWarnings("unchecked")
+       public void testValueStateWithSerializerRequiringMigration() throws Exception {
+               CustomStringSerializer.resetCountingMaps();
+
+               CheckpointStreamFactory streamFactory = createStreamFactory();
+               SharedStateRegistry sharedStateRegistry = new SharedStateRegistry();
+               AbstractKeyedStateBackend<Integer> backend = createKeyedBackend(IntSerializer.INSTANCE);
+
+               ValueStateDescriptor<String> kvId = new ValueStateDescriptor<>(
+                       "id",
+                       new CustomStringSerializer(
+                               org.apache.flink.runtime.state.StateBackendMigrationTestBase.SerializerCompatibilityType.REQUIRES_MIGRATION,
+                               org.apache.flink.runtime.state.StateBackendMigrationTestBase.SerializerVersion.INITIAL));
 
 Review comment:
   Fixed
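
   For context, a minimal sketch of how a TypeSerializerSnapshot can signal the two behaviours that the test's SerializerCompatibilityType enum parameterizes. The class name MigrationSignalingSnapshot is hypothetical and this is not the PR's actual CustomStringSerializerSnapshot (which is not shown in this excerpt); it only relies on the TypeSerializerSnapshot and TypeSerializerSchemaCompatibility types that the test file already imports.

   import org.apache.flink.api.common.typeutils.TypeSerializer;
   import org.apache.flink.api.common.typeutils.TypeSerializerSchemaCompatibility;
   import org.apache.flink.api.common.typeutils.TypeSerializerSnapshot;
   import org.apache.flink.api.common.typeutils.base.StringSerializer;
   import org.apache.flink.core.memory.DataInputView;
   import org.apache.flink.core.memory.DataOutputView;

   import java.io.IOException;

   public class MigrationSignalingSnapshot implements TypeSerializerSnapshot<String> {

       // Mirrors the idea behind SerializerCompatibilityType: decide at resolution time
       // whether restored state can be used as is or needs a full read-old/write-new pass.
       private boolean requiresMigration;

       // Nullary constructor so the snapshot can be instantiated on restore.
       public MigrationSignalingSnapshot() {}

       public MigrationSignalingSnapshot(boolean requiresMigration) {
           this.requiresMigration = requiresMigration;
       }

       @Override
       public int getCurrentVersion() {
           return 1;
       }

       @Override
       public void writeSnapshot(DataOutputView out) throws IOException {
           out.writeBoolean(requiresMigration);
       }

       @Override
       public void readSnapshot(int readVersion, DataInputView in, ClassLoader userCodeClassLoader) throws IOException {
           this.requiresMigration = in.readBoolean();
       }

       @Override
       public TypeSerializer<String> restoreSerializer() {
           // The serializer that can read state written in the previous schema.
           return StringSerializer.INSTANCE;
       }

       @Override
       public TypeSerializerSchemaCompatibility<String> resolveSchemaCompatibility(TypeSerializer<String> newSerializer) {
           // REQUIRES_MIGRATION -> the backend reads with restoreSerializer() and re-writes
           // with newSerializer; COMPATIBLE_AS_IS -> existing state bytes are used directly.
           return requiresMigration
               ? TypeSerializerSchemaCompatibility.compatibleAfterMigration()
               : TypeSerializerSchemaCompatibility.compatibleAsIs();
       }
   }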

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Implement state conversion procedure in state backends
> ------------------------------------------------------
>
>                 Key: FLINK-9808
>                 URL: https://issues.apache.org/jira/browse/FLINK-9808
>             Project: Flink
>          Issue Type: Sub-task
>          Components: State Backends, Checkpointing
>            Reporter: Tzu-Li (Gordon) Tai
>            Assignee: Aljoscha Krettek
>            Priority: Critical
>              Labels: pull-request-available
>             Fix For: 1.7.0
>
>
> With FLINK-9377 in place, config snapshots serve as the single source of truth
> for recreating restore serializers; the next step is to utilize this when
> performing a full-pass state conversion (i.e., read with the old / restore
> serializer, write with the new serializer).
> For Flink's heap-based backends, state conversion inherently happens, since all
> state is always deserialized after restore with the restore serializer and
> written with the new serializer on snapshots.
> For the RocksDB state backend, since state is lazily deserialized, state
> conversion needs to happen for each registered state on its first access if the
> newly registered serializer has a serialization schema different from that of
> the previous serializer.
> This task should consist of three parts:
> 1. Allow {{CompatibilityResult}} to correctly distinguish whether the new
> serializer's schema is a) compatible as is, b) compatible after the serializer
> has been reconfigured, or c) incompatible.
> 2. Introduce state conversion procedures in the RocksDB state backend. The
> conversion should occur on the first state access (a sketch of the
> read-old/write-new pass follows below).
> 3. Make sure that all other backends no longer perform redundant serializer
> compatibility checks; those checks are unnecessary because such backends always
> perform full-pass state conversion anyway.
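
A minimal sketch of the read-with-restore-serializer / write-with-new-serializer pass described in point 2 above. This is not the PR's actual RocksDB implementation; the helper name migrateValues and the representation of state as a list of serialized byte arrays are illustrative assumptions, and only TypeSerializer plus the flink-core DataInputDeserializer/DataOutputSerializer classes are assumed from Flink itself.

import org.apache.flink.api.common.typeutils.TypeSerializer;
import org.apache.flink.core.memory.DataInputDeserializer;
import org.apache.flink.core.memory.DataOutputSerializer;

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

public final class StateMigrationSketch {

    /**
     * Converts each serialized value from the old schema to the new one:
     * read with the old / restore serializer, write with the new serializer.
     */
    static <T> List<byte[]> migrateValues(
            List<byte[]> oldStateBytes,
            TypeSerializer<T> restoreSerializer,
            TypeSerializer<T> newSerializer) throws IOException {

        List<byte[]> migrated = new ArrayList<>(oldStateBytes.size());

        for (byte[] oldBytes : oldStateBytes) {
            // Read the value in its previous schema.
            T value = restoreSerializer.deserialize(new DataInputDeserializer(oldBytes));

            // Re-serialize it in the new schema.
            DataOutputSerializer out = new DataOutputSerializer(Math.max(64, oldBytes.length));
            newSerializer.serialize(value, out);
            migrated.add(out.getCopyOfBuffer());
        }
        return migrated;
    }

    private StateMigrationSketch() {}
}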



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
