Bumping the thread.

On Fri, Aug 4, 2023 at 12:51 PM Ashish Khatkar <akhat...@yelp.com> wrote:

> Hi all,
>
> We are using the Flink 1.17.0 Table API with RocksDB as the state backend
> to provide a service that lets our users run SQL queries. The tables are
> created from an Avro schema, and when the schema is changed in a
> compatible manner, i.e. adding a field with a default, we are unable to
> recover the job from the savepoint. This limitation is also mentioned in
> the Flink docs on stateful upgrades and evolution [1].
>
> Are there any plans to support schema evolution in the Table API? Our
> current workaround is to rebuild the entire state from scratch, discarding
> the output, and then use that rebuilt state in the actual job. Schema
> evolution is already supported for Table Store [2].
>
> [1]
> https://nightlies.apache.org/flink/flink-docs-release-1.17/docs/dev/table/concepts/overview/#stateful-upgrades-and-evolution
> [2]
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-226%3A+Introduce+Schema+Evolution+on+Table+Store
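For reference, below is a minimal, self-contained sketch (plain Avro, no Flink) of the kind of change described in the quoted message: adding a field with a default is compatible by Avro's own rules, which is why the failed savepoint restore is surprising. The record and field names are illustrative, not taken from the original setup.

import org.apache.avro.Schema;
import org.apache.avro.SchemaBuilder;
import org.apache.avro.SchemaCompatibility;

public class CompatibleChangeSketch {
    public static void main(String[] args) {
        // Schema the savepoint was taken with.
        Schema v1 = SchemaBuilder.record("Event").fields()
                .requiredLong("id")
                .requiredString("name")
                .endRecord();

        // Same record plus one new field that has a default value --
        // a compatible change per the Avro specification.
        Schema v2 = SchemaBuilder.record("Event").fields()
                .requiredLong("id")
                .requiredString("name")
                .name("source").type().stringType().stringDefault("unknown")
                .endRecord();

        // Avro itself reports the pair as compatible: a reader using v2
        // can decode records written with v1, filling in the default.
        SchemaCompatibility.SchemaPairCompatibility result =
                SchemaCompatibility.checkReaderWriterCompatibility(v2, v1);
        System.out.println(result.getType()); // prints COMPATIBLE
    }
}

Despite Avro reporting the schemas as compatible, the Table API job compiled against v2 cannot restore the state written under v1, which is the limitation described in [1].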
