+1
Bests,
Dongjoon
On Thu, Oct 10, 2019 at 10:14 Ryan Blue wrote:
+1
Thanks for fixing this!
On Thu, Oct 10, 2019 at 6:30 AM Xiao Li wrote:
+1
On Thu, Oct 10, 2019 at 2:13 AM Hyukjin Kwon wrote:
+1 (binding)
On Thu, Oct 10, 2019 at 5:11 PM, Takeshi Yamamuro wrote:
Thanks for the great work, Gengliang!
+1 for that.
As I said before, the behaviour is pretty common in DBMSs, so the change
helps DBMS users.
Bests,
Takeshi
On Mon, Oct 7, 2019 at 5:24 PM Gengliang Wang wrote:
> Hi everyone,
>
> I'd like to call for a new vote on SPARK-28885
>
+1 (non-binding). Sounds good to me.
On Mon, Oct 7, 2019 at 11:58 PM Wenchen Fan wrote:
+1
I think this is the most reasonable default behavior among the three.
On Mon, Oct 7, 2019 at 6:06 PM Alessandro Solimando <alessandro.solima...@gmail.com> wrote:
+1 (non-binding)
I have been following this standardization effort and I think it is sound
and it provides the needed flexibility via the option.
Best regards,
Alessandro
On Mon, 7 Oct 2019 at 10:24, Gengliang Wang wrote:
standard clarification.
>>
>> I think we can re-visit this proposal and restart the vote
>>
>> --
>> *From:* Ryan Blue
>> *Sent:* Friday, September 6, 2019 5:28 PM
>> *To:* Alastair Green
>> *Cc:* Reynold Xin; Wenchen Fan; Spark dev list; Gengliang Wang
>> *Subject:* Re: [VOTE][SPARK-28885] Follow ANSI store assignment rules in
>> table insertion by default
We discussed this thread quite a bit in the DSv2 sync up and Russell
brought up a really good point about this.
The ANSI rule used here specifies how to store a specific value, V, so this
is a runtime rule — an earlier case covers when V is NULL, so it is
definitely referring to a specific value.
Makes sense.
While the ISO SQL standard automatically becomes an American national (ANSI)
standard, changes are only made to the International (ISO/IEC) Standard, which
is the authoritative specification.
These rules are specified in SQL/Foundation (ISO/IEC SQL Part 2), section 9.2.
Could we
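The runtime rule described above lends itself to a small sketch. The following is a hypothetical Python simulation of storing a specific value V into an INT column under ANSI store assignment, with the NULL case handled first, as in the standard's ordering; it is illustrative only, not Spark's implementation, and the function name is invented:

```python
# Hypothetical simulation of the ANSI store-assignment runtime rule
# discussed above (SQL/Foundation section 9.2) for an INT target column.
# Not Spark's actual implementation.

INT_MIN, INT_MAX = -(2**31), 2**31 - 1

def store_int_ansi(v):
    """Apply the rule for a specific value V at runtime."""
    if v is None:          # the earlier case in the standard: V is NULL
        return None
    iv = int(v)            # numeric cast, dropping any fractional part
    if iv < INT_MIN or iv > INT_MAX:
        raise ValueError(f"value {v!r} out of range for INT")  # runtime error
    return iv

print(store_int_ansi(None))   # None: the NULL case applies
print(store_int_ansi(3.7))    # 3: numeric assignment is allowed
```

The point is that the rule inspects the actual value being stored, so it can only be fully enforced at runtime, not during analysis.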
Having three modes is a lot. Why not just use ansi mode as default, and legacy
for backward compatibility? Then over time there's only the ANSI mode, which is
standard compliant and easy to understand. We also don't need to invent a
standard just for Spark.
On Thu, Sep 05, 2019 at 12:27 AM,
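The three policies under discussion can be contrasted with a toy model. This is a hypothetical Python sketch (the function `assign_to_byte` and its behavior details are invented for illustration, not Spark code) of how the legacy, ANSI, and strict policies could each treat an INT value assigned to a BYTE column:

```python
# Hypothetical simulation contrasting the three store-assignment
# policies (LEGACY / ANSI / STRICT) for assigning an INT value to a
# BYTE (TINYINT) column. Illustrative only.

BYTE_MIN, BYTE_MAX = -128, 127

def assign_to_byte(value, policy):
    if policy == "STRICT":
        # strict rejects INT -> BYTE before any value is seen,
        # because the cast can lose precision
        raise TypeError("Cannot safely cast INT to BYTE")
    if not (BYTE_MIN <= value <= BYTE_MAX):
        if policy == "LEGACY":
            return None   # out-of-range value silently becomes null
        if policy == "ANSI":
            raise ValueError(f"{value} out of range for BYTE")  # runtime error
    return value

print(assign_to_byte(42, "LEGACY"))    # 42
print(assign_to_byte(1000, "LEGACY"))  # None
```

Under this model, dropping to two modes means every in-range write behaves identically, and only the out-of-range and unsafe-cast cases differ.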
+1
To be honest I don't like the legacy policy. It's too loose and easy for
users to make mistakes, especially when Spark returns null if a function
hits errors like overflow.
The strict policy is not good either. It's too strict and stops valid use
cases like writing timestamp values to a date
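The two objections above can be illustrated together. The sketch below is a hypothetical simulation (not Spark code; the name `store_date` is invented) of the TIMESTAMP-to-DATE case: the strict policy rejects the write outright, while ANSI stores the date portion:

```python
# Hypothetical sketch of the trade-off described above: STRICT rejects
# writing a TIMESTAMP value into a DATE column even though that is a
# valid, common use case, while ANSI permits it. Illustrative only.

from datetime import datetime, date

def store_date(value, policy):
    if isinstance(value, datetime):
        if policy == "STRICT":
            raise TypeError("STRICT: TIMESTAMP -> DATE may lose the time part")
        return value.date()   # ANSI/LEGACY: keep the date portion
    return value              # already a plain date

print(store_date(datetime(2019, 10, 10, 5, 11), "ANSI"))  # 2019-10-10
```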