Hi dev,

I'd like to kick off a discussion on a mechanism to validate the precision
of columns for some connectors.

We have agreed that the user should be informed if the connector does not
support the desired precision. From the connector developer's point of
view, there are three levels of information to consider:

   - the capabilities of external systems (e.g. Apache Derby supports
   TIMESTAMP(9), MySQL supports TIMESTAMP(6), etc.)

Connector developers should use this information to validate the user's DDL
and throw an exception if a concrete column is out of range.
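A minimal sketch of such a check (all names here are illustrative, not
actual Flink API):

```java
// Hypothetical sketch: validate a TIMESTAMP precision declared in the DDL
// against the maximum precision the external system is known to support.
public class PrecisionValidator {

    static void validateTimestampPrecision(String column, int declared, int maxSupported) {
        if (declared > maxSupported) {
            // Fail fast at validation time instead of losing precision at runtime.
            throw new IllegalArgumentException(
                "Column '" + column + "' declares TIMESTAMP(" + declared
                + ") but the external system supports at most TIMESTAMP(" + maxSupported + ")");
        }
    }
}
```

For example, a MySQL-backed connector would pass 6 as the maximum, so a
TIMESTAMP(9) column would be rejected during validation.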


   - schema of referenced tables in external systems

If the schema information of referenced tables is available at compile
time, connector developers could use it to find mismatches with the DDL.
But in most cases the schema information is unavailable because of network
isolation or access control, so we should use it with caution.
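When the referenced schema is available, the mismatch check could look
roughly like this (a sketch with illustrative names, assuming the actual
precision/scale have already been fetched, e.g. via JDBC metadata):

```java
// Hypothetical sketch: compare the DECIMAL precision/scale declared in the
// Flink DDL against the schema actually found in the external system.
public class SchemaMismatchCheck {

    static void checkDecimal(String column,
                             int ddlPrecision, int ddlScale,
                             int actualPrecision, int actualScale) {
        if (ddlPrecision != actualPrecision || ddlScale != actualScale) {
            throw new IllegalArgumentException(
                "Column '" + column + "' is DECIMAL(" + ddlPrecision + ", " + ddlScale
                + ") in the DDL but DECIMAL(" + actualPrecision + ", " + actualScale
                + ") in the external system");
        }
    }
}
```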


   - schema-less external systems (e.g. HBase)

If the external system is schema-less, like HBase, the connector developer
should make sure the connector doesn't cause precision loss (e.g.
flink-hbase serializes java.sql.Timestamp to a long in bytes, which only
keeps millisecond precision).
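The precision loss in that round trip is easy to demonstrate: going through
the millisecond-based long drops everything below the millisecond.

```java
import java.sql.Timestamp;

public class TimestampPrecisionLoss {
    public static void main(String[] args) {
        Timestamp ts = Timestamp.valueOf("2020-01-01 00:00:00.123456789");
        long millis = ts.getTime();              // long only carries milliseconds
        Timestamp restored = new Timestamp(millis);
        System.out.println(ts.getNanos());       // 123456789
        System.out.println(restored.getNanos()); // 123000000 - sub-millisecond digits lost
    }
}
```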

To make this more concrete, some scenarios for the JDBC connector are
listed below:

   - The underlying DB supports DECIMAL(65, 30), which is out of the range
   of Flink's DECIMAL
   - The underlying DB supports TIMESTAMP(6), but the user wants to define
   a table with TIMESTAMP(9) in Flink
   - The user defines a table with DECIMAL(10, 4) in the underlying DB, but
   wants to define a table with DECIMAL(5, 2) in Flink
   - The supported precision of the underlying DB varies between versions
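For the first scenario, the check on Flink's side can be sketched as
follows (assuming Flink's DecimalType caps precision at 38; the method
names are illustrative):

```java
// Hypothetical sketch: reject DECIMAL types that cannot be represented in
// Flink, e.g. MySQL's DECIMAL(65, 30).
public class DecimalRangeCheck {

    // Flink's DecimalType limits precision to 38 digits.
    static final int FLINK_MAX_DECIMAL_PRECISION = 38;

    static boolean fitsInFlink(int precision, int scale) {
        return precision >= 1
            && precision <= FLINK_MAX_DECIMAL_PRECISION
            && scale >= 0
            && scale <= precision;
    }
}
```

So DECIMAL(65, 30) would be rejected, while DECIMAL(10, 4) passes.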


What do you think about this? Any feedback is appreciated.

Best Regards,
Zhenghua Gao
