I'm a big -1 on null values for invalid casts. This can lead to even more
unexpected errors and runtime behavior, since null:
1. Is not allowed in all schemas (leading to a runtime error anyway)
2. Is treated the same as a delete in some systems (leading to data loss)
And this would be dependent on
Hi, all
+1 for implementing this new store cast mode.
From the viewpoint of DBMS users, this cast is pretty common for INSERTs, and
I think this functionality could
promote migrations from existing DBMSs to Spark.
The most important thing for DBMS users is that they could optionally
choose this
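To make the trade-off concrete, here is a minimal sketch of the two cast behaviors being weighed (the function names are my own labels for illustration, not Spark options): a legacy-style cast that silently produces null on invalid input, versus an ANSI-style store cast that raises a runtime error instead.

```python
# Sketch only: contrasting null-on-failure with fail-fast store casts.
# These are illustrative helpers, not Spark's actual cast implementation.

def cast_legacy(value):
    """Return int(value), or None (null) when the cast is invalid."""
    try:
        return int(value)
    except (TypeError, ValueError):
        return None

def cast_ansi_store(value):
    """ANSI-style store cast: raise a runtime error on invalid input."""
    return int(value)  # propagates ValueError for bad input

print(cast_legacy("12"))   # 12
print(cast_legacy("abc"))  # None -- the bad value silently becomes null
```

The objection upthread is to the first behavior: a null produced here may only surface much later, as a constraint violation or a lost row.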
Great thanks - we can take this to JIRAs now.
I think it's worth changing the implementation of atanh if the test value
just reflects what Spark does, and there's evidence it's a little bit
inaccurate.
There's an equivalent formula which seems to have better accuracy.
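As a sketch of the accuracy point (an assumed illustration, not necessarily the formula or code in question): the textbook form 0.5 * ln((1 + x) / (1 - x)) loses precision near zero, while an algebraically equivalent rewrite using log1p does not.

```python
import math

# Sketch only: two mathematically equivalent atanh formulas, illustrating
# how one can be noticeably less accurate (assumption: this mirrors the
# issue discussed, not Spark's exact source).

def atanh_naive(x):
    # 0.5 * ln((1 + x) / (1 - x)) loses the input entirely for tiny |x|,
    # because 1 + x and 1 - x both round to 1.0 in double precision.
    return 0.5 * math.log((1 + x) / (1 - x))

def atanh_stable(x):
    # Equivalent rewrite via log1p keeps full accuracy for small |x|.
    return 0.5 * math.log1p(2 * x / (1 - x))

x = 1e-17
print(atanh_naive(x))   # 0.0 -- the small input is lost to rounding
print(atanh_stable(x))  # 1e-17 -- agrees with math.atanh(x)
```

Both return the same value for moderate inputs; they only diverge where cancellation bites, which is exactly what a test against reference values would catch.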
On Fri, Jul 26, 2019 at 10:02
Hi Ryan,
Thanks for the suggestions on the proposal and doc.
Currently, there is no data type validation in table insertion of V1. We
are on the same page that we should improve it. But requiring UpCast goes
from one extreme to the other: it is possible that many queries would break
after upgrading to