Hi Jark and Jingsong,
Thanks for your replies! Since modifying the SQL type system needs a lot of
work, I agree that we should postpone this until we get more requests from
users.
For my own case, based on domain knowledge, I think a precision of
38 would be enough (though the fields were
Hi Xingcan,
As a workaround, can we convert large decimals to VARCHAR?
If Flink SQL wants to support large decimals, we should investigate how
other big data systems and databases handle them. As Jark said, this needs
a lot of work.
Best,
Jingsong Lee
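The VARCHAR workaround suggested above can be sketched in plain Java (outside of Flink, just to show that the round trip through a string is lossless): the source column is mapped to VARCHAR/STRING on the Flink side, and downstream consumers parse it back into a BigDecimal. The 45-digit value below is a made-up illustration.

```java
import java.math.BigDecimal;

public class LargeDecimalAsString {
    public static void main(String[] args) {
        // A value with 45 significant digits -- beyond DecimalType's ceiling of 38.
        BigDecimal original =
            new BigDecimal("123456789012345678901234567890123456789.012345");

        // Workaround: ship the value through a VARCHAR/STRING column instead.
        String asVarchar = original.toPlainString();

        // Downstream, the exact value can be recovered without loss.
        BigDecimal recovered = new BigDecimal(asVarchar);

        System.out.println(original.precision());          // 45
        System.out.println(original.compareTo(recovered)); // 0 -> lossless round trip
    }
}
```

The obvious cost is that the column loses its numeric semantics in SQL (no arithmetic, numeric ordering, or aggregation without casting back).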
On Tue, Aug 31, 2021 at 11:16 AM Jark Wu wrote:
Hi Xingcan, Timo,
Yes, the flink-cdc-connector and the JDBC connector also don't support
precisions larger than 38 (or columns declared without precision).
However, we haven't received any reports of this problem from users.
Maybe it is not very common to have a precision higher than 38, or a
column declared without precision.
I think it makes sense to support
Hi Timo,
Though it's an extreme case, I still think this is a hard blocker if we
want to ingest data from an RDBMS (or other systems supporting large-
precision numbers).
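To make the blocker concrete, here is a minimal Java sketch (the 42-digit value is a made-up illustration): forcing a value with more than 38 significant digits into a 38-digit context, as a DecimalType column with maximum precision would, silently changes the value.

```java
import java.math.BigDecimal;
import java.math.MathContext;
import java.math.RoundingMode;

public class PrecisionOverflow {
    public static void main(String[] args) {
        // 42 significant digits, e.g. from a NUMERIC column declared without precision.
        BigDecimal value = new BigDecimal("9".repeat(42));

        // Rounding into a 38-digit context mimics squeezing the value into
        // the maximum precision that DecimalType can declare.
        BigDecimal truncated = value.round(new MathContext(38, RoundingMode.HALF_UP));

        System.out.println(value.precision());          // 42
        System.out.println(value.compareTo(truncated)); // non-zero -> information was lost
    }
}
```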
The tricky part is that users can declare numeric types without any
precision or scale restrictions in an RDBMS (e.g., NUMBER in
Hi Xingcan,
in theory there should be no hard blocker for supporting this. The
implementation should be flexible enough at most locations. We simply
adopted 38 from the Blink code base, which in turn adopted it from Hive.
However, this could be a breaking change for existing pipelines and we
would
Hi all,
Recently, I was trying to load some CDC data from Oracle/Postgres databases
and found that the current precision range [1, 38] for DecimalType may not
meet the requirements of some source types. For instance, in Oracle, if a
column is declared as `NUMBER` without precision and scale, the