[ https://issues.apache.org/jira/browse/FLINK-13438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16894837#comment-16894837 ]
Caizhi Weng commented on FLINK-13438:
-------------------------------------

Hi [~lirui], [~jark] and [~lzljs3620320], sorry for the late response. I actually tried to add support for DataTypes.DATE/TIME/TIMESTAMP before submitting this issue; my patch is provided in the attachment. With this patch applied, two tests in `HiveCatalogDataTypeTest` fail because of this issue. Please take a look.

> Fix Hive connector with DataTypes.DATE/TIME/TIMESTAMP support
> -------------------------------------------------------------
>
>                 Key: FLINK-13438
>                 URL: https://issues.apache.org/jira/browse/FLINK-13438
>             Project: Flink
>          Issue Type: Sub-task
>      Components: Connectors / Hive
>            Reporter: Caizhi Weng
>            Priority: Blocker
>             Fix For: 1.9.0, 1.10.0
>
>         Attachments: 0001-hive.patch
>
> Like the JDBC connectors, the Hive connectors communicate with the Flink framework through TableSchema, which contains DataType. Because time data read from and written to Hive must use java.sql.* types, while the default conversion classes of Flink's time data types are java.time.*, the Hive connectors must be fixed to support DataTypes.DATE/TIME/TIMESTAMP.
>
> Currently, when reading a table from Hive, the table schema is created from Hive's own schema, so the time types in the created schema are SQL time types rather than local time types. If a user specifies a local time type in the table schema when creating a table in Hive, they will get a different schema when reading the table back. This is undesired.

-- This message was sent by Atlassian JIRA (v7.6.14#76016)
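(For context, a minimal sketch of the conversion-class mismatch described above. In Flink's Table API, a DataType such as DataTypes.TIMESTAMP() defaults to a java.time.* conversion class but can be bridged to a java.sql.* class, e.g. `DataTypes.TIMESTAMP(9).bridgedTo(java.sql.Timestamp.class)`; the snippet below needs no Flink dependency and only illustrates, with plain JDK classes, the java.sql.* ↔ java.time.* round trip a connector like Hive's has to perform. Class and method names here are illustrative, not from the patch.)

```java
import java.sql.Timestamp;
import java.time.LocalDateTime;

public class TimeBridging {

    // Hive hands back the legacy java.sql.* time classes, while Flink's
    // DataTypes.TIMESTAMP defaults to java.time.LocalDateTime, so a
    // connector must convert in both directions.
    static LocalDateTime fromSql(Timestamp ts) {
        return ts.toLocalDateTime();
    }

    static Timestamp toSql(LocalDateTime ldt) {
        return Timestamp.valueOf(ldt);
    }

    public static void main(String[] args) {
        LocalDateTime ldt = LocalDateTime.of(2019, 7, 26, 12, 0, 0);
        Timestamp ts = toSql(ldt);
        // The round trip is lossless for values within Timestamp's range.
        System.out.println(fromSql(ts).equals(ldt));
    }
}
```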