Hi All,

I am not sure whether this has already been discussed on this forum.
In our setup with Flink 1.16.0 we have made sure that everything
necessary for the Hive catalog to work is in place.

The Flink dialect works fine functionally (with some issues; I will come
to those later).

But when I follow the steps in
https://nightlies.apache.org/flink/flink-docs-master/docs/dev/table/hive-compatibility/hive-dialect/queries/overview/#examples
I get an exception as soon as I switch to the Hive dialect:
        at org.apache.flink.table.client.SqlClient.main(SqlClient.java:161) [flink-sql-client-1.16.0-0.0-SNAPSHOT.jar:1.16.0-0.0-SNAPSHOT]
Caused by: java.lang.ClassCastException: class jdk.internal.loader.ClassLoaders$AppClassLoader cannot be cast to class java.net.URLClassLoader (jdk.internal.loader.ClassLoaders$AppClassLoader and java.net.URLClassLoader are in module java.base of loader 'bootstrap')
        at org.apache.hadoop.hive.ql.session.SessionState.<init>(SessionState.java:413) ~[flink-sql-connector-hive-3.1.2_2.12-1.16.0-0.0-SNAPSHOT.jar:1.16.0-0.0-SNAPSHOT]
        at org.apache.hadoop.hive.ql.session.SessionState.<init>(SessionState.java:389) ~[flink-sql-connector-hive-3.1.2_2.12-1.16.0-0.0-SNAPSHOT.jar:1.16.0-0.0-SNAPSHOT]
        at org.apache.flink.table.planner.delegation.hive.HiveSessionState.<init>(HiveSessionState.java:80) ~[flink-sql-connector-hive-3.1.2_2.12-1.16.0-0.0-SNAPSHOT.jar:1.16.0-0.0-SNAPSHOT]
        at org.apache.flink.table.planner.delegation.hive.HiveSessionState.startSessionState(HiveSessionState.java:128) ~[flink-sql-connector-hive-3.1.2_2.12-1.16.0-0.0-SNAPSHOT.jar:1.16.0-0.0-SNAPSHOT]
        at org.apache.flink.table.planner.delegation.hive.HiveParser.parse(HiveParser.java:210) ~[flink-sql-connector-hive-3.1.2_2.12-1.16.0-0.0-SNAPSHOT.jar:1.16.0-0.0-SNAPSHOT]
        at org.apache.flink.table.client.gateway.local.LocalExecutor.parseStatement(LocalExecutor.java:172) ~[flink-sql-client-1.16.0-0.0-SNAPSHOT.jar:1.16.0-0.0-SNAPSHOT]
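
For reference, the dialect switch that triggers the trace above is just the
documented SQL client command (a sketch of our session; the catalog name
"myhive" here is a placeholder, not our actual setup):

```sql
-- in the Flink SQL client, after registering the Hive catalog
USE CATALOG myhive;          -- placeholder catalog name
SET table.sql-dialect = hive;  -- the exception is thrown on the next statement
```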

I have made sure that all the dialect-related steps were completely
followed, including
https://issues.apache.org/jira/browse/FLINK-25128

In the Flink catalog, if we create a table:

CREATE TABLE testsource (
  `date` STRING,
  `geo_altitude` FLOAT
)
PARTITIONED BY (`date`)
WITH (
  'connector' = 'hive',
  'sink.partition-commit.delay' = '1 s',
  'sink.partition-commit.policy.kind' = 'metastore,success-file'
);

The partition always gets created on the last set of columns rather than
on the columns we specify. Is this a known bug?
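
To double-check which columns the catalog actually registered as partition
keys, we can inspect the table from the client (a sketch, using the
testsource table from above):

```sql
-- shows the schema including partition information
DESCRIBE testsource;

-- shows the full DDL as the catalog stored it
SHOW CREATE TABLE testsource;
```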

Regards
Ram
