In HA mode, can the same application run multiple jobs?
Flink Hudi HMS Catalog problem
Flink SQL reads and writes Hudi and synchronizes Hive tables via the Hudi HMS Catalog. If a Hive database contains both Parquet tables and Hudi tables, two different Flink catalogs have to be registered, which causes problems and is not very friendly for data analysts. Spark does not have this problem: the spark_catalog catalog can access both Hudi and Parquet tables. Should this be solved in Hudi or in Flink?
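To make the friction concrete, here is a minimal sketch of what the two-catalog workaround looks like today. Catalog names and WITH options are illustrative and not verified against any particular Hudi/Flink version:

```sql
-- A Hudi HMS catalog for the Hudi tables in the database
CREATE CATALOG hudi_catalog WITH (
  'type' = 'hudi',
  'mode' = 'hms',
  'hive.conf.dir' = '/etc/hive/conf'
);

-- A separate Hive catalog for the plain Parquet tables in the same database
CREATE CATALOG hive_catalog WITH (
  'type' = 'hive',
  'hive-conf-dir' = '/etc/hive/conf'
);

-- The same Hive database must be reached through two catalog prefixes:
SELECT * FROM hudi_catalog.db1.hudi_tbl;
SELECT * FROM hive_catalog.db1.parquet_tbl;
```

With Spark, by contrast, a single spark_catalog resolves both table types, so queries and joins across them need no catalog switching.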
How to add permission validation when Flink reads and writes Hive tables?
Flink supports both SQL and JAR job types. How can we implement a unified access check in Flink? Spark supports extensions (e.g. SparkSessionExtensions) for this; Flink lacks a comparable extension mechanism.
ConfigOption Support Version
- Record the version in which each parameter was added
Support Stored procedures
Support operations like Hudi/Iceberg procedure calls, such as savepoint/checkpoint management (https://hudi.apache.org/docs/procedures/): CALL system.procedure_name(arg_1, arg_2, ... arg_n). On a Flink-based development platform, being able to complete such management operations directly with a CALL statement would be very convenient. Hudi/Iceberg can easily expose various custom table actions this way.
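As a sketch of what this would enable, the statements below mirror procedures from the Hudi documentation linked above (e.g. create_savepoint). Running them from Flink SQL is exactly what this request asks for; the table name and commit time are placeholders:

```sql
-- Create a savepoint on a Hudi table at a given commit instant
CALL create_savepoint(table => 'db1.hudi_tbl', commit_time => '20230801120000');

-- List existing savepoints for the table
CALL show_savepoints(table => 'db1.hudi_tbl');
```

Today such operations typically require a separate Spark SQL session or a CLI tool; exposing them as Flink SQL CALL statements would keep the whole workflow inside one platform.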