Another important aspect of this problem is whether a user is conscious of
the cast that Spark inserts. Most of the time, users are not aware of
implicitly inserted casts, which means that replacing values with NULL would
be very surprising behavior. The impact of this choice
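The surprise described above can be illustrated with a small sketch. This is not Spark's actual cast implementation; the function names and the use of Python's `None` for SQL NULL are illustrative only. It contrasts a legacy-style cast that silently replaces unrepresentable values with NULL against a strict cast that fails loudly:

```python
# Hypothetical sketch of the two cast policies under discussion.
# None stands in for SQL NULL; names are illustrative, not Spark APIs.

def cast_to_int_legacy(value: str):
    """Legacy-style cast: an unparseable value silently becomes NULL."""
    try:
        return int(value)
    except ValueError:
        return None  # the silent replacement users may never notice

def cast_to_int_strict(value: str):
    """Strict cast: raise instead of silently adjusting the data."""
    return int(value)  # raises ValueError on bad input
```

A user who never asked for a cast sees `cast_to_int_legacy("abc")` quietly produce NULL, whereas the strict variant surfaces the problem immediately.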
Hi,
Are there any plans to support this?
If not, could anyone explain why it should not be supported?
It looks like it’s been done in
https://github.com/apache/spark/commit/b5c5bd98ea5e8dbfebcf86c5459bdf765f5ceb53#diff-0314224342bb8c30143ab784b3805d19
but there is no clear explanation as to
OK to push back: "disagreeing with the premise that we can afford to not be
maximal on standard 3. The correctness of the data is non-negotiable, and
whatever solution we settle on cannot silently adjust the user’s data under any
circumstances."
This blanket statement sounds great on the surface,
Sorry I meant the current behavior for V2, which fails the query compilation if
the cast is not safe.
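The V2 behavior mentioned above — rejecting the query at compilation rather than inserting an unsafe cast — can be sketched as follows. The type names and the `SAFE_CASTS` table are purely illustrative assumptions, not Spark's actual analysis rules:

```python
# Hypothetical sketch: fail at analysis/"compile" time when an implicit
# cast could lose information, instead of producing NULLs at runtime.
# The SAFE_CASTS whitelist is illustrative only.

SAFE_CASTS = {
    ("int", "long"), ("int", "double"),
    ("float", "double"), ("long", "decimal"),
}

def check_writable(source_type: str, target_type: str) -> None:
    """Reject the plan up front if the implicit cast is not safe."""
    if source_type != target_type and (source_type, target_type) not in SAFE_CASTS:
        raise TypeError(
            f"cannot safely cast {source_type} to {target_type}; "
            "failing the query instead of silently adjusting data"
        )
```

With this policy, a query writing a `long` column into an `int` column fails before execution, so no data is ever silently altered.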
Agreed that a separate discussion about overflow might be warranted. I’m
surprised we don’t throw an error now, but it might make sense to do so.
-Matt Cheah
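The overflow concern raised above can be made concrete with a small sketch. This is not Spark code; it simply contrasts JVM-style wraparound on a narrowing cast with a checked cast that raises, which is the behavior being asked about:

```python
# Hypothetical sketch of two overflow policies for casting a 64-bit
# long down to a 32-bit int. Function names are illustrative only.

INT_MIN, INT_MAX = -(1 << 31), (1 << 31) - 1

def cast_long_to_int_wrap(v: int) -> int:
    """Wraparound semantics: what a silent JVM-style downcast produces."""
    v &= 0xFFFFFFFF
    return v - (1 << 32) if v > INT_MAX else v

def cast_long_to_int_checked(v: int) -> int:
    """Checked semantics: throw on overflow instead of corrupting data."""
    if not (INT_MIN <= v <= INT_MAX):
        raise OverflowError(f"{v} is out of int range")
    return v
```

For example, silently downcasting `2**31` yields `-2147483648` under wraparound, while the checked variant raises, surfacing the overflow to the user.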
From: Reynold Xin
Matt, what do you mean by maximizing standard 3 while not throwing errors when
operations overflow? Those two seem contradictory.
On Wed, Jul 31, 2019 at 9:55 AM, Matt Cheah < mch...@palantir.com > wrote:
>
> I’m -1, simply from disagreeing with the premise that we can afford to not be
> maximal on standard 3. The correctness of the data is non-negotiable, and
> whatever solution we settle on cannot silently adjust the user’s data under any
> circumstances.
I think the existing behavior is fine, or
I opened a PR https://github.com/apache/spark/pull/25310. Please take a look.
On Mon, Jul 29, 2019 at 4:35 PM, Hyukjin Kwon wrote:
> Thanks, guys. Let me mimic the template and open a PR soon -
> I am currently stuck with some other work. I will take a look in a few days.
>
> On Sat, Jul 27, 2019