Thank you, Sean and all.

One decision was made swiftly today.

I believe we can move forward case by case on the others until the feature
freeze (the 3.0 branch cut).

Best,
Dongjoon.

On Mon, Jul 8, 2019 at 13:03 Marco Gaido <marcogaid...@gmail.com> wrote:

> Hi Sean,
>
> Thanks for bringing this up. Honestly, my opinion is that Spark should be
> fully ANSI SQL compliant. Where ANSI SQL compliance is not at stake, I am
> fine with following any other DB. IMHO, we won't get 100% compliance with
> any DB anyway - postgres in this case (e.g. for decimal operations, we are
> following SQLServer, and postgres behaviour would be very hard to meet) -
> so I think it is fine for the PMC members to decide, feature by feature,
> whether it is worth supporting.
>
> Thanks,
> Marco
>
> On Mon, 8 Jul 2019, 20:09 Sean Owen, <sro...@apache.org> wrote:
>
>> See the particular issue / question at
>> https://github.com/apache/spark/pull/24872#issuecomment-509108532 and
>> the larger umbrella at
>> https://issues.apache.org/jira/browse/SPARK-27764 -- Dongjoon rightly
>> suggests this is a broader question.
>>
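For readers following along, here is a minimal sketch of the decimal
divergence Marco mentions. It is only an illustration, not an authoritative
statement of either system's behaviour: the object and app names are
placeholders, and the exact output depends on the Spark version and on the
spark.sql.decimalOperations.allowPrecisionLoss setting.

import org.apache.spark.sql.SparkSession

object DecimalDivergenceSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .master("local[*]")
      .appName("decimal-divergence-sketch")
      .getOrCreate()

    // 1 / 3 with wide decimal operands: Spark derives the result precision and
    // scale from the operands (SQL Server-style rules) and caps them at 38, so
    // the quotient keeps only as many fractional digits as fit. PostgreSQL's
    // NUMERIC is effectively unbounded for the same expression, which is why
    // matching its behaviour exactly would be hard.
    spark.sql(
      "SELECT CAST(1 AS DECIMAL(38,18)) / CAST(3 AS DECIMAL(38,18)) AS q"
    ).show(truncate = false)

    spark.stop()
  }
}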
