Based on my experience in development of Spark SQL, the maintenance cost is
very small for supporting different versions of Hive metastore. Feel free
to ping me if we hit any issue about it.
Cheers,
Xiao
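[Editor's note: the multi-version metastore support Xiao refers to is exposed through Spark SQL configuration. The keys below are the documented ones; the version value and the `maven` jar-resolution mode are illustrative choices, not a recommendation:]

```
# spark-defaults.conf -- illustrative values
spark.sql.hive.metastore.version   1.2.1
spark.sql.hive.metastore.jars      maven
```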
Reynold Xin wrote on Tue, Jan 22, 2019 at 11:18 PM:
Actually, a non-trivial fraction of the users and customers I interact with still use very old Hive metastores, because it's very difficult to upgrade a Hive metastore wholesale (it would require that all the production jobs accessing the same metastore be upgraded at once). This is even harder than a JVM upgrade.
Yea, I was thinking about that too. They are too old to keep. +1 for
removing them.
On Wed, Jan 23, 2019 at 11:30 AM, Dongjoon Hyun wrote:
Hi, All.
Currently, Apache Spark supports Hive Metastore(HMS) 0.12 ~ 2.3.
Among them, HMS 0.x releases look very old since we are in 2019.
If these are not used in production anymore, can we drop HMS 0.x
support in 3.0.0?
hive-0.12.0 2013-10-10
hive-0.13.0
Thanks for reviewing this! I'll create an SPIP doc and issue for it and
call a vote.
On Tue, Jan 22, 2019 at 11:41 AM Matt Cheah wrote:
+1 for n-part namespace as proposed. Agree that a short SPIP would be
appropriate for this. Perhaps also a JIRA ticket?
-Matt Cheah
From: Felix Cheung
Date: Sunday, January 20, 2019 at 4:48 PM
To: "rb...@netflix.com", Spark Dev List
Subject: Re: [DISCUSS] Identifiers with
Agree, I'm not pushing for it unless there's other evidence. Note that the
closure check does entail actually serializing the closure, not just
checking serializability.
I don't like flags either, but this one sounded like it could actually be
something a user would want to vary, globally, for runs of the same code.
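[Editor's note: the point about the check not being free can be sketched in plain Java. Spark's real check lives in its ClosureCleaner; the helper below is a hypothetical stand-in that illustrates why "checking serializability" means running the serializer end to end:]

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.NotSerializableException;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class ClosureCheckSketch {

    // Illustrative helper, not Spark's API: the only reliable way to
    // verify an object graph is serializable is to serialize it, so the
    // check pays the full serialization cost every time it runs.
    static byte[] ensureSerializable(Object closure) {
        try (ByteArrayOutputStream bytes = new ByteArrayOutputStream();
             ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(closure); // full serialization, not a cheap reflective test
            out.flush();
            return bytes.toByteArray();
        } catch (NotSerializableException e) {
            throw new IllegalArgumentException("Task not serializable", e);
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        // A serializable lambda passes, and we pay to serialize it fully.
        Runnable ok = (Runnable & Serializable) () -> { };
        System.out.println("serialized " + ensureSerializable(ok).length + " bytes");

        // A non-serializable closure is rejected up front.
        try {
            ensureSerializable(new Object());
        } catch (IllegalArgumentException e) {
            System.out.println("rejected non-serializable closure");
        }
    }
}
```

This is why a global flag to skip the check is plausible: the cost scales with the size of the captured closure, not with whether the check passes.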
On Tue,