Hi Lian,
GenericInMemoryCatalog does not support any settings at the moment.
You can refer to [1] for details on the supported catalogs
and to [2] for details on the supported types.
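As a minimal sketch (assuming the SQL Client's default catalog and database
names, default_catalog and default_database), you can work with the built-in
in-memory catalog directly from the client:

SHOW CATALOGS;
USE CATALOG default_catalog;
USE default_database;
SHOW TABLES;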
"Kafka schema registry for schema" is under discussion [3],
which can be ready in 1.12.
sql client supports DDL to create a table with json format [4],
you can use ROW type to define nested json.
for example:
create table my_table (
  f varchar,
  nest_column row<
    a varchar,
    b int,
    c int
  >
) with (
  ...
)
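Once such a table is registered, nested fields can be read with dot notation.
A small sketch against the example table above (the with clause still needs
to be filled in per [4]):

select f, nest_column.a, nest_column.c from my_table;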
[1] https://ci.apache.org/projects/flink/flink-docs-release-1.11/dev/table/catalogs.html#catalogs
[2] https://ci.apache.org/projects/flink/flink-docs-release-1.11/dev/table/types.html
[3] https://issues.apache.org/jira/browse/FLINK-16048
[4] https://ci.apache.org/projects/flink/flink-docs-release-1.11/dev/table/connectors/formats/json.html#how-to-create-a-table-with-json-format
Best,
Godfrey
Lian Jiang <[email protected]> wrote on Sat, Jul 18, 2020 at 6:28 AM:
> Hi,
>
> I am experimenting with Flink SQL by following
> https://ci.apache.org/projects/flink/flink-docs-release-1.11/dev/table/sqlClient.html.
> I want to set up an environment yaml to query Kafka data (JSON in Avro
> format). Where can I find the information below?
>
> 1. use GenericInMemoryCatalog (e.g. type, settings)
> 2. use Kafka schema registry for schema. The example hard-codes the schema
> in the env yaml.
> 3. other than UDF, is there a way to easily query a deeply nested json in
> Flink SQL?
>
> Appreciate your help!
>
> Regards
> Lian
>