Hi Ryan,
Thanks for the explanation! This sheds light on some areas but also raised
some questions.
My main conclusion on the Kafka connector side is to keep v1 as the
default, give users some time to migrate to v2, and delete v1 later once
v2 is stable (which makes sense from my
Hi Gabor,
First, a little context... one of the goals of DSv2 is to standardize the
behavior of SQL operations in Spark. For example, running CTAS when the
table already exists will fail, instead of taking some action that depends
on what the source chooses, like dropping and recreating the table,
inserting into it, or failing.
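To make the standardization point concrete, here is a small illustrative sketch of the intended DSv2 behavior (the table names `events` and `raw_events` are hypothetical, not from the thread):

```sql
-- Hypothetical example: under DSv2 semantics, CTAS against an existing
-- table fails uniformly across sources.
CREATE TABLE events AS SELECT * FROM raw_events;  -- succeeds: table did not exist
CREATE TABLE events AS SELECT * FROM raw_events;  -- fails: table already exists
-- A DSv1 source, by contrast, could choose its own behavior here:
-- drop & recreate, append the rows, or fail.
```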
Unfortunately,
Hi All,
I've taken a look at the code and docs to find out when DSv1 sources have
to be removed (in cases where a DSv2 replacement is implemented). After some
digging I've found DSv1 sources which have already been removed, but in some
cases v1 and v2 still exist in parallel.
Can somebody please tell me